Dataset fields: question (string), question_id (int64), score (int64), link (string), body (string), answers (list of { answer_id, score, text }), verbolised (string).
Finite morphism of varieties - morphism of sheaves
2,107,054
5
https://math.stackexchange.com/questions/2107054/finite-morphism-of-varieties-morphism-of-sheaves
<p>Let $f:X\rightarrow Y$ be a finite morphism of non-singular projective varieties of degree $d$.</p> <ul> <li>Consider the map of sheaves $O_Y\rightarrow f_*O_X$. Is this morphism injective? Why? Note that $f_*O_X$ is a rank $d$ vector bundle.</li> <li>Suppose the above morphism of sheaves is injective. Consider the exact sequence where $L$ is the cokernel: $$0\rightarrow O_Y\rightarrow f_*O_X\rightarrow L\rightarrow 0.$$ Does this exact sequence split and give us $f_*O_X=O_Y\oplus L$?</li> <li>Suppose $f$ is of degree 2 and $X$ and $Y$ are non-singular surfaces. Let $C\subset Y$ be a non-singular curve along which $f$ is branched. Then is there some relation between $L$ (as above) and $O_Y(C)$?</li> </ul>
[ { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" }, { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" }, { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. 
Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" }, { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" }, { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. 
If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" }, { "answer_id": 2107414, "score": 8, "text": "<p>Yes, the map is injective assuming the map is dominanat. Since <span class=\"math-container\">$1\\in\\mathcal{O}_Y$</span> goes to <span class=\"math-container\">$1\\in f_*\\mathcal{O}_X$</span>, you should be able to check this.</p>\n\n<p>The exact sequence above may not split in positive characteristic, but it does in zero characteristic (or characteristic not dividing <span class=\"math-container\">$d$</span>), using the trace map.</p>\n\n<p>For the last part (char <span class=\"math-container\">$\\neq 2$</span>), one has <span class=\"math-container\">$f_*\\mathcal{O}_X=\\mathcal{O}_Y\\oplus L$</span> and checking the algebra structure on <span class=\"math-container\">$f_*\\mathcal{O}_X$</span>, one can show that <span class=\"math-container\">$L^2=\\mathcal{O}_Y(-E)$</span> where <span class=\"math-container\">$E$</span> is the branch locus.</p>\n" }, { "answer_id": 2107430, "score": 6, "text": "<p>For the first part of your question, a morphism of schemes $f : X \\to Y$ such that $\\mathcal{O}_Y \\to f_*\\mathcal{O}_X$ is injective in said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4). </p>\n" } ]
Question: Finite morphism of varieties - morphism of sheaves Answer 1: <p>Yes, the map is injective assuming the map is dominant. Since <span class="math-container">$1\in\mathcal{O}_Y$</span> goes to <span class="math-container">$1\in f_*\mathcal{O}_X$</span>, you should be able to check this.</p> <p>The exact sequence above may not split in positive characteristic, but it does in characteristic zero (or characteristic not dividing <span class="math-container">$d$</span>), using the trace map.</p> <p>For the last part (char <span class="math-container">$\neq 2$</span>), one has <span class="math-container">$f_*\mathcal{O}_X=\mathcal{O}_Y\oplus L$</span> and, checking the algebra structure on <span class="math-container">$f_*\mathcal{O}_X$</span>, one can show that <span class="math-container">$L^2=\mathcal{O}_Y(-E)$</span> where <span class="math-container">$E$</span> is the branch locus.</p> Answer 2: <p>For the first part of your question, a morphism of schemes $f : X \to Y$ such that $\mathcal{O}_Y \to f_*\mathcal{O}_X$ is injective is said to be schematically dominant. If $f$ is dominant and $Y$ is reduced then $f$ is schematically dominant (EGA IV3 prop 11.10.4).</p>
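The trace-map splitting asserted in the first answer can be made explicit. A minimal sketch in LaTeX, assuming (as that answer does) that $d$ is invertible on $Y$; the notation $\operatorname{Tr}_{X/Y}$ and $\iota$ for the inclusion is mine, not from the thread:

```latex
% Sketch: the normalized trace splits the inclusion \iota : O_Y -> f_* O_X.
% Since f is finite flat, f_* O_X is locally free of rank d, and multiplication
% by \iota(a) = a . 1 acts as a times the identity, so its trace is d a. Hence
\[
  s \;:=\; \tfrac{1}{d}\,\operatorname{Tr}_{X/Y} \colon f_*\mathcal{O}_X \longrightarrow \mathcal{O}_Y,
  \qquad
  s(\iota(a)) \;=\; \tfrac{1}{d}\,\operatorname{Tr}_{X/Y}(a\cdot 1) \;=\; a,
\]
% so s \circ \iota = id and the exact sequence splits:
\[
  0 \to \mathcal{O}_Y \to f_*\mathcal{O}_X \to L \to 0
  \quad\Longrightarrow\quad
  f_*\mathcal{O}_X \;\cong\; \mathcal{O}_Y \oplus L .
\]
```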
Evaluate $\lim _{x\to \infty }\left(\cos\sqrt{x}-\cos\sqrt{x-1}\right)$
2,107,253
5
https://math.stackexchange.com/questions/2107253/evaluate-lim-x-to-infty-left-cos-sqrtx-cos-sqrtx-1-right
<p>How should I determine the following limit?</p> <p>$\lim _{x\to \infty }\left(\cos\sqrt{x}-\cos\sqrt{x-1}\right)$</p>
[ { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" }, { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. 
Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" }, { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. 
Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" }, { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. 
Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" }, { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. 
Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" }, { "answer_id": 2107258, "score": 7, "text": "<p>HINT</p>\n\n<p>Use $\\cos x - \\cos y = -2 \\sin(\\frac {(x - y)} 2 ) \\sin(\\frac {(x + y)} 2 )$</p>\n\n<p>Then $\\lim _{x\\to \\infty }\\sin(\\frac {(\\sqrt x - \\sqrt {x-1})} 2 )=\\lim _{x\\to \\infty }\\sin(\\frac 1 {2(\\sqrt x + \\sqrt {x-1})})=0$</p>\n\n<p>Because $\\sin(\\frac {(\\sqrt x + \\sqrt {x-1})} 2) $ is bounded, it follows the limit is zero</p>\n" }, { "answer_id": 2107266, "score": 5, "text": "<p>The difference between $\\sqrt{x}$ and $\\sqrt{x-1}$ becomes arbitrary small as $x$ goes to infinity. Since $\\cos$ is a <a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"nofollow noreferrer\" title=\"Uniform continuity\">uniformly continuous</a> function, the difference between the cosine of two arbitrary close numbers is 0. So the limit is 0.</p>\n\n<p>Uniform continuity is a stronger form of continuity. 
Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p>\n\n<p>This method requires some extra work to make a rigid proof, but it is a general method that works in many cases and it complies with intuition.</p>\n" }, { "answer_id": 2107276, "score": 4, "text": "<p>$$\\cos\\sqrt{x}-\\cos\\sqrt{x-1}=2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{\\sqrt{x-1}-\\sqrt{x}}{2}=$$\n$$=-2\\sin\\frac{\\sqrt{x}+\\sqrt{x-1}}{2}\\sin\\frac{1}{2(\\sqrt{x-1}+\\sqrt{x})}\\rightarrow0$$</p>\n" }, { "answer_id": 2107278, "score": 3, "text": "<p>$\\displaystyle \\lim_{x\\rightarrow \\infty}\\bigg|\\int^{\\sqrt{x}}_{\\sqrt{x-1}}\\sin xdx\\bigg| \\leq \\lim_{x\\rightarrow \\infty}\\int^{\\sqrt{x}}_{\\sqrt{x-1}}1dx = \\lim_{x\\rightarrow \\infty}\\left(\\sqrt{x}-\\sqrt{x-1}\\right) = \\lim_{x\\rightarrow \\infty}\\frac{1}{\\left(\\sqrt{x}+\\sqrt{x-1}\\right)}=0$</p>\n" }, { "answer_id": 2107300, "score": 1, "text": "<p>Direct approach:</p>\n\n<p>Let $f(x) := \\cos\\sqrt{x} - \\cos\\sqrt{x - 1}$.</p>\n\n<p>By <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem\" rel=\"nofollow noreferrer\">the mean value theorem</a>, for two values $a, b \\in \\mathbb R$, there exists $t\\in\\mathbb [a, b]$ such that $\\cos(b) - \\cos(a) = \\cos'(t)(b - a)$. Thus,$$\n|f(x)|=|\\cos\\sqrt{x} - \\cos\\sqrt{x - 1}| \\\\\n=|\\cos'(t)(\\sqrt{x} - \\sqrt{x - 1})| \\\\\n=|-\\sin(t)|\\cdot|\\sqrt{x} - \\sqrt{x - 1}|\\\\\n\\le|\\sqrt{x} - \\sqrt{x - 1}| = \\sqrt{x} - \\sqrt{x - 1} =: g(x)$$</p>\n\n<p>for all $x&gt;1$.</p>\n\n<hr>\n\n<p>Assertion: $\\lim_{x\\rightarrow\\infty} g(x) = 0$.</p>\n\n<p>Proof: Let $\\epsilon &gt; 0$. Then</p>\n\n<p>$$g(x) \\le \\epsilon\\\\\n\\Leftrightarrow\\sqrt{x} \\leq \\epsilon + \\sqrt{x - 1}\\\\\n\\Leftrightarrow x \\leq \\epsilon^2 + 2\\sqrt{x - 1} + x -1\\\\\n\\Leftrightarrow x \\geq 1+(1-\\epsilon^2)^2/4 =: x_0$$</p>\n\n<p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \\le \\epsilon$.</p>\n\n<hr>\n\n<p>Together with $|f(x)|\\le g(x)$, this yields $\\lim_{x\\rightarrow\\infty} f(x) = 0$.</p>\n" } ]
Question: Evaluate $\lim _{x\to \infty }\left(\cos\sqrt{x}-\cos\sqrt{x-1}\right)$ Answer 1: <p>HINT</p> <p>Use $\cos x - \cos y = -2 \sin(\frac {(x - y)} 2 ) \sin(\frac {(x + y)} 2 )$</p> <p>Then $\lim _{x\to \infty }\sin(\frac {(\sqrt x - \sqrt {x-1})} 2 )=\lim _{x\to \infty }\sin(\frac 1 {2(\sqrt x + \sqrt {x-1})})=0$</p> <p>Because $\sin(\frac {(\sqrt x + \sqrt {x-1})} 2)$ is bounded, it follows that the limit is zero.</p> Answer 2: <p>The difference between $\sqrt{x}$ and $\sqrt{x-1}$ becomes arbitrarily small as $x$ goes to infinity. Since $\cos$ is a <a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="nofollow noreferrer" title="Uniform continuity">uniformly continuous</a> function, the difference between the cosines of two arbitrarily close numbers tends to $0$. So the limit is $0$.</p> <p>Uniform continuity is a stronger form of continuity. Having a bounded derivative is a sufficient condition for a function to be uniformly continuous.</p> <p>This method requires some extra work to make a rigorous proof, but it is a general method that works in many cases and it agrees with intuition.</p> Answer 3: <p>$$\cos\sqrt{x}-\cos\sqrt{x-1}=2\sin\frac{\sqrt{x}+\sqrt{x-1}}{2}\sin\frac{\sqrt{x-1}-\sqrt{x}}{2}=$$ $$=-2\sin\frac{\sqrt{x}+\sqrt{x-1}}{2}\sin\frac{1}{2(\sqrt{x-1}+\sqrt{x})}\rightarrow0$$</p> Answer 4: <p>$\displaystyle \lim_{x\rightarrow \infty}\bigg|\int^{\sqrt{x}}_{\sqrt{x-1}}\sin x\,dx\bigg| \leq \lim_{x\rightarrow \infty}\int^{\sqrt{x}}_{\sqrt{x-1}}1\,dx = \lim_{x\rightarrow \infty}\left(\sqrt{x}-\sqrt{x-1}\right) = \lim_{x\rightarrow \infty}\frac{1}{\sqrt{x}+\sqrt{x-1}}=0$</p> Answer 5: <p>Direct approach:</p> <p>Let $f(x) := \cos\sqrt{x} - \cos\sqrt{x - 1}$.</p> <p>By <a href="https://en.wikipedia.org/wiki/Mean_value_theorem" rel="nofollow noreferrer">the mean value theorem</a>, for two values $a, b \in \mathbb R$ there exists $t\in [a, b]$ such that $\cos(b) - \cos(a) = \cos'(t)(b - a)$. Thus,$$ |f(x)|=|\cos\sqrt{x} - \cos\sqrt{x - 1}| \\ =|\cos'(t)(\sqrt{x} - \sqrt{x - 1})| \\ =|-\sin(t)|\cdot|\sqrt{x} - \sqrt{x - 1}|\\ \le|\sqrt{x} - \sqrt{x - 1}| = \sqrt{x} - \sqrt{x - 1} =: g(x)$$</p> <p>for all $x&gt;1$.</p> <hr> <p>Assertion: $\lim_{x\rightarrow\infty} g(x) = 0$.</p> <p>Proof: Let $0 &lt; \epsilon &lt; 1$ (for $\epsilon \geq 1$ the bound below holds for all $x \geq 1$). Then</p> <p>$$g(x) \le \epsilon\\ \Leftrightarrow\sqrt{x} \leq \epsilon + \sqrt{x - 1}\\ \Leftrightarrow x \leq \epsilon^2 + 2\epsilon\sqrt{x - 1} + x -1\\ \Leftrightarrow x \geq 1+\frac{(1-\epsilon^2)^2}{4\epsilon^2} =: x_0$$</p> <p>Thus, for all $x&gt; x_0$, we have $|g(x)| = g(x) \le \epsilon$.</p> <hr> <p>Together with $|f(x)|\le g(x)$, this yields $\lim_{x\rightarrow\infty} f(x) = 0$.</p>
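A quick numerical sanity check of both the limit and the mean-value-theorem bound above; a minimal sketch in Python (the helper name `difference` is an illustrative choice, not from the thread):

```python
import math

def difference(x: float) -> float:
    # f(x) = cos(sqrt(x)) - cos(sqrt(x - 1)), the expression in the question.
    return math.cos(math.sqrt(x)) - math.cos(math.sqrt(x - 1))

for x in (1e2, 1e4, 1e6, 1e8):
    # Mean value theorem bound from the last answer:
    # |f(x)| <= sqrt(x) - sqrt(x-1) = 1/(sqrt(x) + sqrt(x-1)).
    bound = 1.0 / (math.sqrt(x) + math.sqrt(x - 1))
    assert abs(difference(x)) <= bound + 1e-12  # small slack for float rounding
    print(f"x = {x:.0e}: f(x) = {difference(x):+.3e}, bound = {bound:.3e}")
```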
Why is $9 \times 11{\dots}12 = 100{\dots}08$?
2,107,382
5
https://math.stackexchange.com/questions/2107382/why-is-9-times-11-dots12-100-dots08
<p>While I was working on a <a href="https://en.wikipedia.org/wiki/Luhn_algorithm" rel="nofollow noreferrer">Luhn algorithm</a> implementation, I discovered something unusual.</p> <p>$$ 9 \times 2 = 18 $$ $$ 9 \times 12 = 108 $$ $$ 9 \times 112 = 1008 $$ $$ 9 \times 1112 = 10008 $$</p> <p>Hope you can observe the pattern here.</p> <p>How does one prove this? What is its significance?</p>
[ { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 \\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as $$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" }, { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 \\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as $$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" }, { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 \\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as 
$$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" }, { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 \\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as $$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" }, { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 \\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as $$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" }, { "answer_id": 2107386, "score": 7, "text": "<p>Here's an idea.</p>\n\n<p>This pattern is easier to understand:\n$$ 9 \\times 11 = 99$$</p>\n\n<p>$$9 \\times 111 = 999$$</p>\n\n<p>$$9 \\times 1111 = 9999$$</p>\n\n<p>Then see what happens when you add 9 to both sides.</p>\n" }, { "answer_id": 2107396, "score": 5, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Repunit\"><em>repunit</em></a>, $R_k = \\overbrace{111\\ldots 111}^{k \\text{ ones}}$ , can be written as $R_k = \\dfrac{10^k-1}{9}$</p>\n\n<p>Your nice pattern corresponds to $9\\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p>\n" }, { "answer_id": 2107390, "score": 3, "text": "<p>One logic behind.</p>\n\n<p>$9 \\times (0 + 2) = 9 \\times 0 + 9 \\times 2 = 18$</p>\n\n<p>$9 \\times (10 + 2) = 9 \\times 10 + 9 \\times 2 = 108$</p>\n\n<p>$9 \\times (110 + 2) = 9 \\times 110 + 9 \\times 2 = 1008$</p>\n\n<p>So on.</p>\n" }, { "answer_id": 2107387, "score": 0, "text": "<p>$$9 \\times 11\\cdots12 = 9 
\\times (11\\cdots 11+1) = 99\\cdots99 + 9.$$</p>\n" }, { "answer_id": 2107389, "score": 0, "text": "<p>Write $1...12$ as $$\\overline{1...(\\times\\ n)...1} +1=\\frac{10^n-1}{9}+1$$</p>\n\n<p>Hence $$9\\times (\\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>\n" } ]
Question: Why is $9 \times 11{\dots}12 = 100{\dots}08$? Answer 1: <p>Here's an idea.</p> <p>This pattern is easier to understand: $$ 9 \times 11 = 99$$</p> <p>$$9 \times 111 = 999$$</p> <p>$$9 \times 1111 = 9999$$</p> <p>Then see what happens when you add 9 to both sides.</p> Answer 2: <p>The <a href="https://en.wikipedia.org/wiki/Repunit"><em>repunit</em></a>, $R_k = \overbrace{111\ldots 111}^{k \text{ ones}}$ , can be written as $R_k = \dfrac{10^k-1}{9}$</p> <p>Your nice pattern corresponds to $9\times (R_k+1) = (10^k-1)+9 = 10^k+8$</p> Answer 3: <p>One logic behind.</p> <p>$9 \times (0 + 2) = 9 \times 0 + 9 \times 2 = 18$</p> <p>$9 \times (10 + 2) = 9 \times 10 + 9 \times 2 = 108$</p> <p>$9 \times (110 + 2) = 9 \times 110 + 9 \times 2 = 1008$</p> <p>So on.</p> Answer 4: <p>$$9 \times 11\cdots12 = 9 \times (11\cdots 11+1) = 99\cdots99 + 9.$$</p> Answer 5: <p>Write $1...12$ as $$\overline{1...(\times\ n)...1} +1=\frac{10^n-1}{9}+1$$</p> <p>Hence $$9\times (\frac{10^n-1}{9}+1)=10^n-1+9=10^n+8$$</p>
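A quick numerical sanity check of the repunit identity in the answers above (a minimal sketch, assuming only a standard Python 3 interpreter):

```python
# Verify 9 * (R_k + 1) == 10**k + 8 for the first few repunits R_k = 11...1 (k ones).
for k in range(2, 8):
    repunit = (10**k - 1) // 9
    assert 9 * (repunit + 1) == 10**k + 8
    print(f"9 x {repunit + 1} = {9 * (repunit + 1)}")  # 108, 1008, 10008, ...
```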
Proving that $\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}dx=\gamma$
2,107,666
5
https://math.stackexchange.com/questions/2107666/proving-that-int-10-fracexe1-x-1xdx-gamma
<p>This came up while I was looking at the asymptotic behavior of $f(x)=\int_0^x \frac{e^t-1}{t}dt=\sum_{h=1}^\infty\frac{x^h}{h\cdot h!}$, which is a nice and entire function, as $x\to -\infty$, namely that I think $f(x)\sim -\ln(-x)-\gamma$ as $x\to -\infty$. I've reduced that to the problem of proving $$\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}dx=\gamma$$ where $\gamma\approx 0.577$ is the Euler-Mascheroni constant.</p>
[ { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. 
For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" }, { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. 
For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" }, { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. 
For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" }, { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. 
For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" }, { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. 
For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" }, { "answer_id": 2107718, "score": 4, "text": "<p>First, enforce the substitution $x\\to -1/x$, we obtain </p>\n\n<p>$$\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx=-\\int_1^\\infty \\frac{e^{-x}}{x}\\,dx \\tag 1$$</p>\n\n<hr>\n\n<p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\\log(x)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=-\\int_1^\\infty e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=-\\int_0^\\infty e^{-x}\\log(x)\\,dx+\\int_0^1 e^{-x}\\log(x)\\,dx\\\\\\\\\n&amp;=\\gamma +\\int_0^1 e^{-x}\\log(x)\\,dx\\tag 2\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\\log(x)$ and $v=-(e^{-x}-1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\int_{-1}^0\\frac{e^{1/x}}{x}\\,dx&amp;=\\gamma +\\int_0^1 \\frac{e^{-x}-1}{x}\\,dx\\\\\\\\\n&amp;=\\gamma -\\int_{-1}^0\\frac{e^x-1}{x}\\,dx \\tag 3\n\\end{align}$$</p>\n\n<p>whence solving $(3)$ for $\\gamma$ yields the coveted relationship</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\gamma=\\int_{-1}^0 \\frac{e^x+e^{1/x}-1}{x}\\,dx}$$</p>\n</blockquote>\n\n<p>And we are done!</p>\n\n<hr>\n\n<p>At the end of <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $-\\int_0^\\infty \\log(x)e^{-x}\\,dx=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n \\frac1k\\right)$, the latter of which is often given as the definition of $\\gamma$.</p>\n" }, { "answer_id": 2107830, "score": 3, "text": "<p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\\displaystyle \\lim_{\\epsilon\\to 0^-}\\int_{-1}^\\epsilon \\frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\\int\\frac{e^x+e^{1/x}-1}x dx=\\mathrm{Ei}(x)-\\mathrm{Ei}\\left(\\frac1x\\right)-\\ln{x}+C\\tag{1}$$\nIt's easy to see that the value at $x=-1$ is $-i\\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\\lim_{x\\to 0^-}\\mathrm{Ei}\\left(\\frac1x\\right)=-\\lim_{t\\to\\infty}\\mathrm{Ei}(-t)=0.$$\nThis leaves us with $\\displaystyle\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]$. For $x&lt;0$, we can use <a href=\"http://dlmf.nist.gov/6.6#E2\" rel=\"nofollow noreferrer\">the series expansion for $E_1(z)$</a> to write $$\\mathrm{Ei}(x)=\\gamma+\\ln(-x)+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}.\\tag{2}$$ We now have \n$$\\lim_{x\\to 0^-}\\left[\\mathrm{Ei}(x)-\\ln{x}\\right]=\\lim_{x\\to 0^-}\\left[\\gamma+\\ln(-x)-\\ln{x}+\\sum_{n=1}^\\infty\\frac{x^n}{n\\,n!}\\right]=\\gamma-i\\pi\\tag{3}$$ and may conclude that $$\\int_{-1}^0\\frac{e^x+e^{1/x}-1}x dx=\\gamma+i\\pi-i\\pi=\\gamma.\\tag{4}$$ Q.E.D.</p>\n" } ]
Question: Proving that $\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}dx=\gamma$ Answer 1: <p>First, enforce the substitution $x\to -1/x$, we obtain </p> <p>$$\int_{-1}^0\frac{e^{1/x}}{x}\,dx=-\int_1^\infty \frac{e^{-x}}{x}\,dx \tag 1$$</p> <hr> <p>Second, integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x}$ and $v=\log(x)$ reveals</p> <p>$$\begin{align} \int_{-1}^0\frac{e^{1/x}}{x}\,dx&amp;=-\int_1^\infty e^{-x}\log(x)\,dx\\\\ &amp;=-\int_0^\infty e^{-x}\log(x)\,dx+\int_0^1 e^{-x}\log(x)\,dx\\\\ &amp;=\gamma +\int_0^1 e^{-x}\log(x)\,dx\tag 2 \end{align}$$</p> <hr> <p>Finally, integrating by parts the integral on the right-hand side of $(2)$ with $u=\log(x)$ and $v=-(e^{-x}-1)$ reveals</p> <p>$$\begin{align} \int_{-1}^0\frac{e^{1/x}}{x}\,dx&amp;=\gamma +\int_0^1 \frac{e^{-x}-1}{x}\,dx\\\\ &amp;=\gamma -\int_{-1}^0\frac{e^x-1}{x}\,dx \tag 3 \end{align}$$</p> <p>whence solving $(3)$ for $\gamma$ yields the coveted relationship</p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\gamma=\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}\,dx}$$</p> </blockquote> <p>And we are done!</p> <hr> <p>At the end of <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that $\gamma$ as given by $-\int_0^\infty \log(x)e^{-x}\,dx=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n \frac1k\right)$, the latter of which is often given as the definition of $\gamma$.</p> Answer 2: <p>I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\displaystyle \lim_{\epsilon\to 0^-}\int_{-1}^\epsilon \frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\int\frac{e^x+e^{1/x}-1}x dx=\mathrm{Ei}(x)-\mathrm{Ei}\left(\frac1x\right)-\ln{x}+C\tag{1}$$ It's easy to see that the value at $x=-1$ is $-i\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\lim_{x\to 0^-}\mathrm{Ei}\left(\frac1x\right)=-\lim_{t\to\infty}\mathrm{Ei}(-t)=0.$$ This leaves us with $\displaystyle\lim_{x\to 0^-}\left[\mathrm{Ei}(x)-\ln{x}\right]$. For $x&lt;0$, we can use <a href="http://dlmf.nist.gov/6.6#E2" rel="nofollow noreferrer">the series expansion for $E_1(z)$</a> to write $$\mathrm{Ei}(x)=\gamma+\ln(-x)+\sum_{n=1}^\infty\frac{x^n}{n\,n!}.\tag{2}$$ We now have $$\lim_{x\to 0^-}\left[\mathrm{Ei}(x)-\ln{x}\right]=\lim_{x\to 0^-}\left[\gamma+\ln(-x)-\ln{x}+\sum_{n=1}^\infty\frac{x^n}{n\,n!}\right]=\gamma-i\pi\tag{3}$$ and may conclude that $$\int_{-1}^0\frac{e^x+e^{1/x}-1}x dx=\gamma+i\pi-i\pi=\gamma.\tag{4}$$ Q.E.D.</p>
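Both derivations above can be sanity-checked numerically; a minimal sketch, assuming the mpmath library is available:

```python
# Integrate (e^x + e^(1/x) - 1)/x over [-1, 0] and compare with the
# Euler-Mascheroni constant. The integrand extends continuously to 0^-
# (its limit there is 1), and mpmath's quadrature avoids the endpoints.
from mpmath import mp, quad, exp, euler

mp.dps = 30
val = quad(lambda x: (exp(x) + exp(1/x) - 1) / x, [-1, 0])
print(val)               # 0.57721566490153286...
print(abs(val - euler))  # agrees with gamma to high precision
```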
How to symbolically write &quot;there exists at most&quot;
2,107,668
5
https://math.stackexchange.com/questions/2107668/how-to-symbolically-write-there-exists-at-most
<p>For example, an injective function can be defined as</p> <p>$\forall y \in Y$ <strong>there exists at most one</strong> $x \, : \, \forall f(x) = y$</p> <p>How could I represent the bold part of the above expression? For "there exists at least one" we have $\exists$, for "there exists a unique" we have $\exists!$. Is there a symbolic notation for "...at most one"? The classic definition of injective, i.e., $\forall \, a,b\in X, \, f(a) = f(b) \rightarrow a = b$ seems to side-step this, though.</p>
[ { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. </p>\n" }, { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. </p>\n" }, { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. </p>\n" }, { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. </p>\n" }, { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. 
</p>\n" }, { "answer_id": 2107677, "score": 6, "text": "<p>If you will be using it repeatedly, you could invent a notation...\n$$\n\\forall y \\in Y\\;\\exists^{\\le 1} x \n$$\nOf course you will have to define it, say like this\n$$\n\\exists^{\\le 1} x\\;\\;\\Phi(x)\\qquad\\text{means}\\qquad\n\\forall x_1 \\forall x_2 \\big(\\Phi(x_1) \\wedge \\Phi(x_2) \\longrightarrow x_1=x_2\\big)\n$$</p>\n" }, { "answer_id": 2107675, "score": 3, "text": "<p>I have never heard of such a symbol, but if you really want to you can write $\\exists !\\, x \\lor \\nexists\\, x, \\,\\dots $</p>\n\n<p>However, this is not easily readable and I would not advise its use at all. </p>\n" } ]
Question: How to symbolically write &quot;there exists at most&quot; Answer 1: <p>If you will be using it repeatedly, you could invent a notation... $$ \forall y \in Y\;\exists^{\le 1} x $$ Of course you will have to define it, say like this $$ \exists^{\le 1} x\;\;\Phi(x)\qquad\text{means}\qquad \forall x_1 \forall x_2 \big(\Phi(x_1) \wedge \Phi(x_2) \longrightarrow x_1=x_2\big) $$</p> Answer 2: <p>I have never heard of such a symbol, but if you really want to you can write $\exists !\, x \lor \nexists\, x, \,\dots $</p> <p>However, this is not easily readable and I would not advise its use at all. </p>
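The defining equivalence in the first answer is easy to play with on a finite domain; a small illustration in plain Python (the predicates are made up for the example):

```python
# "There exists at most one x with P(x)" holds iff any two witnesses coincide.
def at_most_one(domain, P):
    return all(x1 == x2
               for x1 in domain for x2 in domain
               if P(x1) and P(x2))

print(at_most_one(range(5), lambda x: x == 3))      # True: exactly one witness
print(at_most_one(range(5), lambda x: x > 10))      # True: no witness at all
print(at_most_one(range(5), lambda x: x % 2 == 0))  # False: 0, 2, 4 all qualify
```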
How to compute $e^A$?
2,108,103
5
https://math.stackexchange.com/questions/2108103/how-to-compute-ea
<p>Let $\omega&gt;0$. Then how to compute matrix $e^A$, where $$A= \begin{bmatrix} 0 &amp; \omega \\ -\omega &amp; 0 \end{bmatrix}. $$ I am excited to see matrix as powers. I don't know how it can be power but any way it seems good stuff to learn. How to compute it?</p>
[ { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-w^2I/2!+w^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!+\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-w^2/2!+w^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!+\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. 
On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}cos(\\omega)&amp;sin(\\omega)\\\\-sin(\\omega)&amp;cos(\\omega)\\\\\\end{pmatrix}$</p>\n" }, { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-w^2I/2!+w^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!+\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-w^2/2!+w^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!+\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. 
In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}cos(\\omega)&amp;sin(\\omega)\\\\-sin(\\omega)&amp;cos(\\omega)\\\\\\end{pmatrix}$</p>\n" }, { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-w^2I/2!+w^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!+\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-w^2/2!+w^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!+\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. 
However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}cos(\\omega)&amp;sin(\\omega)\\\\-sin(\\omega)&amp;cos(\\omega)\\\\\\end{pmatrix}$</p>\n" }, { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-w^2I/2!+w^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!+\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-w^2/2!+w^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!+\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. 
Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. 
On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}\\cos(\\omega)&amp;\\sin(\\omega)\\\\-\\sin(\\omega)&amp;\\cos(\\omega)\\\\\\end{pmatrix}$</p>\n" }, { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-\\omega^2I/2!+\\omega^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!-\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-\\omega^2/2!+\\omega^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!-\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. 
In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}\\cos(\\omega)&amp;\\sin(\\omega)\\\\-\\sin(\\omega)&amp;\\cos(\\omega)\\\\\\end{pmatrix}$</p>\n" }, { "answer_id": 2108121, "score": 11, "text": "<p>Take the definition:</p>\n\n<p>$$exp(A)=I+A+A^2/2!+A^3/3!+\\cdots$$</p>\n\n<p>and group even and odd terms:</p>\n\n<p>$$exp(A)=(I+A^2/2!+A^4/4!+\\cdots) + (A/1!+A^3/3!+A^5/5!+ \\cdots)$$</p>\n\n<p>Using the fact that $A^2=-\\omega^2I$, </p>\n\n<p>$$exp(A)=(I-\\omega^2I/2!+\\omega^4I/4!-\\cdots)+(I-\\omega^2I/3!+\\omega^4I/5!-\\cdots)A$$</p>\n\n<p>$$exp(A)=(1-\\omega^2/2!+\\omega^4/4!-\\cdots)I+(1-\\omega^2/3!+\\omega^4/5!-\\cdots)A$$</p>\n\n<blockquote>\n <p>$$exp(A)=\\cos(\\omega)I+\\dfrac{\\sin{\\omega}}{\\omega}A$$</p>\n</blockquote>\n\n<p>But there is another important way to write it:</p>\n\n<blockquote>\n <p>$$exp(A)=\\pmatrix{\\cos(\\omega)&amp;\\sin(\\omega)\\\\ -\\sin(\\omega)&amp;\\cos(\\omega)}$$</p>\n</blockquote>\n\n<p>which is the rotation with angle $-\\omega$.</p>\n\n<p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href=\"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205\" rel=\"noreferrer\">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p>\n" }, { "answer_id": 2108109, "score": 8, "text": "<p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as \"multiplying $e$ by itself $A$ times\" when $A$ is a matrix. Instead, it's a notational shorthand for\n$$\nI + A + \\frac{A^2}2 + \\frac{A^3}{6} + \\frac{A^4}{24} + \\cdots + \\frac{A^n}{n!} + \\cdots\n$$\nIn order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\\sin$ and $\\cos$</p>\n" }, { "answer_id": 2108120, "score": 4, "text": "<p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series\n$$e^A:=\\sum_{n\\ge0}\\frac{1}{n!}A^n,$$\nwhere $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. 
However, for matrices of reasonable size, one can proceed as follows:</p>\n\n<ol>\n<li>If $A$ is <a href=\"https://en.wikipedia.org/wiki/Diagonalizable_matrix\" rel=\"nofollow noreferrer\">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan normal form</a> $A = PJP^{-1}$.</li>\n<li>It is easy to compute $D^n$ (respectively, $J^n$), and thus\n$$A^n = (PDP^{-1})^n = PD^nP^{-1},$$\nso that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li>\n</ol>\n\n<p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p>\n" }, { "answer_id": 2108438, "score": 1, "text": "<p>The characteristic equation for matrix $A$ is $t^2+\\omega^2=0$ so $t=\\pm \\omega i$ will be eigenvalues. On finding eigenvectors $\\begin{pmatrix}1\\\\i\\\\\\end{pmatrix}$ and $\\begin{pmatrix}i\\\\1\\\\\\end{pmatrix}$, construct the modal matrix $P =\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}$\nand obtain $P^{-1}=\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>Using $A=PDP^{-1}\\implies e^A=Pe^DP^{-1}$</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}1&amp;i\\\\i&amp;1\\\\\\end{pmatrix}\\begin{pmatrix}e^{\\omega i}&amp;0\\\\0&amp;e^{-\\omega i}\\\\\\end{pmatrix}\\frac{1}{2}\\begin{pmatrix}1&amp;-i\\\\-i&amp;1\\\\\\end{pmatrix}$.</p>\n\n<p>$\\implies e^A=\\begin{pmatrix}\\cos(\\omega)&amp;\\sin(\\omega)\\\\-\\sin(\\omega)&amp;\\cos(\\omega)\\\\\\end{pmatrix}$</p>\n" } ]
Question: How to compute $e^A$? Answer 1: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 2: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 3: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 4: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. 
On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p> Answer 5: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 6: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 7: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. 
It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 8: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p> Answer 9: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 10: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 11: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. 
In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 12: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p> Answer 13: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 14: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 15: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. 
Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 16: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p> Answer 17: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 18: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 19: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. 
However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 20: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p> Answer 21: <p>Take the definition:</p> <p>$$exp(A)=I+A+A^2/2!+A^3/3!+\cdots$$</p> <p>and group even and odd terms:</p> <p>$$exp(A)=(I+A^2/2!+A^4/4!+\cdots) + (A/1!+A^3/3!+A^5/5!+ \cdots)$$</p> <p>Using the fact that $A^2=-\omega^2I$, </p> <p>$$exp(A)=(I-\omega^2I/2!+\omega^4I/4!-\cdots)+(I-\omega^2I/3!+\omega^4I/5!-\cdots)A$$</p> <p>$$exp(A)=(1-\omega^2/2!+\omega^4/4!-\cdots)I+(1-\omega^2/3!+\omega^4/5!-\cdots)A$$</p> <blockquote> <p>$$exp(A)=\cos(\omega)I+\dfrac{\sin{\omega}}{\omega}A$$</p> </blockquote> <p>But there is another important way to write it:</p> <blockquote> <p>$$exp(A)=\pmatrix{\cos(\omega)&amp;\sin(\omega)\\ -\sin(\omega)&amp;\cos(\omega)}$$</p> </blockquote> <p>which is the rotation with angle $-\omega$.</p> <p>It is a classical (and deep) result that the exponential of an antisymmetric matrix is an orthogonal matrix. Have a small look at (<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205" rel="noreferrer">http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3205</a>) that recalls what exists in dimension 2 and 3 (important Rodrigues formula) and then considers computations in higher dimensions.</p> Answer 22: <p>$e^A$ is not really a <em>power</em>, but rather the <em>exponential map</em>. There is no such thing as "multiplying $e$ by itself $A$ times" when $A$ is a matrix. 
Instead, it's a notational shorthand for $$ I + A + \frac{A^2}2 + \frac{A^3}{6} + \frac{A^4}{24} + \cdots + \frac{A^n}{n!} + \cdots $$ In order to calculate this efficiently, I suggest you try to find the general form of $A^n$, and from there, the general form of <em>each separate component</em> of $e^A$, keeping in mind the power series definition of $\sin$ and $\cos$</p> Answer 23: <p>As mentioned in the answer by @Arthur, the <em>definition</em> of $e^A$ is the power series $$e^A:=\sum_{n\ge0}\frac{1}{n!}A^n,$$ where $A^0=I$. Now for some matrices (yours, for example) one can try to find some regularities in the sequence $A,A^2,A^3,...$ and work from there. However, for matrices of reasonable size, one can proceed as follows:</p> <ol> <li>If $A$ is <a href="https://en.wikipedia.org/wiki/Diagonalizable_matrix" rel="nofollow noreferrer">diagonalizable</a>, write it as $A=PDP^{-1}$ for a diagonal matrix $D$. Else (a bit harder), find its <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A = PJP^{-1}$.</li> <li>It is easy to compute $D^n$ (respectively, $J^n$), and thus $$A^n = (PDP^{-1})^n = PD^nP^{-1},$$ so that one can easily find $e^A$. In particular, for a diagonal matrix $D$ we have that $e^D$ is given by the diagonal matrix with the exponential of the entries of $D$ as elements, making things extremely simple.</li> </ol> <p>Of course, all the hard work here is to diagonalize $A$ (resp. take it into its Jordan normal form). For large matrices, often some probabilistic methods are used to approximate solutions. It is a very interesting (and very active) field of research, but I don't know much about it.</p> Answer 24: <p>The characteristic equation for matrix $A$ is $t^2+\omega^2=0$ so $t=\pm \omega i$ will be eigenvalues. On finding eigenvectors $\begin{pmatrix}1\\i\\\end{pmatrix}$ and $\begin{pmatrix}i\\1\\\end{pmatrix}$, construct the modal matrix $P =\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}$ and obtain $P^{-1}=\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>Using $A=PDP^{-1}\implies e^A=Pe^DP^{-1}$</p> <p>$\implies e^A=\begin{pmatrix}1&amp;i\\i&amp;1\\\end{pmatrix}\begin{pmatrix}e^{\omega i}&amp;0\\0&amp;e^{-\omega i}\\\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&amp;-i\\-i&amp;1\\\end{pmatrix}$.</p> <p>$\implies e^A=\begin{pmatrix}\cos(\omega)&amp;\sin(\omega)\\-\sin(\omega)&amp;\cos(\omega)\\\end{pmatrix}$</p>
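The closed form derived in the answers above is easy to sanity-check numerically. A minimal sketch, assuming Python with `numpy` and `scipy` available, and taking $A=\begin{pmatrix}0&\omega\\-\omega&0\end{pmatrix}$ as the answers do (the test value `w = 0.7` is arbitrary):

```python
# Sketch: verify exp(A) = cos(w) I + (sin(w)/w) A for A = [[0, w], [-w, 0]].
# Assumes numpy/scipy are installed; w = 0.7 is an arbitrary test value.
import numpy as np
from scipy.linalg import expm

w = 0.7
A = np.array([[0.0, w], [-w, 0.0]])

# Truncated defining series: sum of A^k / k! for k = 0..28
series = np.zeros_like(A)
term = np.eye(2)            # holds A^k / k!, starting at k = 0
for k in range(1, 30):
    series += term
    term = term @ A / k     # advance A^(k-1)/(k-1)! to A^k/k!

closed = np.array([[np.cos(w), np.sin(w)],
                   [-np.sin(w), np.cos(w)]])

print(np.allclose(series, expm(A)))  # True: the series matches the library expm
print(np.allclose(expm(A), closed))  # True: both match the rotation matrix
```

Both checks agree, consistent with the rotation-by-$-\omega$ interpretation given in the answers.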
How to find sides of parallelogram given centroid and two vertices?
2,108,607
5
https://math.stackexchange.com/questions/2108607/how-to-find-sides-of-parallelogram-given-centroid-and-two-vertices
<p>I've been given a parallelogram with two of its vertices, $A$ and $B$, equal to (-5, -8, 3) and (4, 7, -5) respectively, and its centroid $S$ at (-10, 4, 6).</p> <p>How do I go about finding the coordinates of the remaining points $D$ and $C$?</p>
[ { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" }, { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" }, { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" }, { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. 
If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" }, { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" }, { "answer_id": 2108612, "score": 1, "text": "<p>Let $C = (x_1, y_1, z_1)$</p>\n\n<p>Then mid point of AC is $(-10, 4, 6)$</p>\n\n<p>$\\left(\\frac{-5 + x_1}{2}, \\frac{-8 + y_1}{2}, \\frac{3 + z_1}{2}\\right) = (-10, 4, 6)$</p>\n\n<p>$\\frac{-5 + x_1}{2} = -10, \\frac{-8 + y_1}{2} = 4, \\frac{3 + z_1}{2} = 6$</p>\n\n<p>$x_1 = -15, y_1 = 16, z_1 = 9$</p>\n\n<p>Compare and find values. Similarly for BD.</p>\n" }, { "answer_id": 2108611, "score": 0, "text": "<p>The centroid is the midpoint between each pair of oposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p>\n\n<p>Now what do you get with your given coordinates?</p>\n" } ]
Question: How to find sides of parallelogram given centroid and two vertices? Answer 1: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 2: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p> Answer 3: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 4: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p> Answer 5: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 6: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p> Answer 7: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 8: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. 
If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p> Answer 9: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 10: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p> Answer 11: <p>Let $C = (x_1, y_1, z_1)$</p> <p>Then the midpoint of AC is $(-10, 4, 6)$</p> <p>$\left(\frac{-5 + x_1}{2}, \frac{-8 + y_1}{2}, \frac{3 + z_1}{2}\right) = (-10, 4, 6)$</p> <p>$\frac{-5 + x_1}{2} = -10, \frac{-8 + y_1}{2} = 4, \frac{3 + z_1}{2} = 6$</p> <p>$x_1 = -15, y_1 = 16, z_1 = 9$</p> <p>Compare and find values. Similarly for BD.</p> Answer 12: <p>The centroid is the midpoint between each pair of opposite vertices of the parallelogram. So if one vertex is at $(2,3,0)$ and the opposite vertex is at $(0,1,2)$, the centroid would be halfway between at $(1,2,1)$. If instead you are given $(0,1,2)$ for a vertex and $(1,2,1)$ for the centroid, the opposite vertex must be at $(2,3,0)$ to get the midpoint in the right place.</p> <p>Now what do you get with your given coordinates?</p>
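The midpoint relation both answers rely on gives the remaining vertices directly: since the centroid is the midpoint of each diagonal, the vertex opposite $A$ is $C = 2S - A$ and the vertex opposite $B$ is $D = 2S - B$. A minimal sketch in plain Python, using the values from the question:

```python
# Sketch: reflect each given vertex through the centroid S to get its opposite.
A = (-5, -8, 3)
B = (4, 7, -5)
S = (-10, 4, 6)

C = tuple(2 * s - a for s, a in zip(S, A))
D = tuple(2 * s - b for s, b in zip(S, B))
print(C)  # (-15, 16, 9)  -- matches the coordinates found in the first answer
print(D)  # (-24, 1, 17)
```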
Does $\sum q^{\sqrt n}$ converge?
2,109,313
5
https://math.stackexchange.com/questions/2109313/does-sum-q-sqrt-n-converge
<blockquote> <p>Does $\sum q^{\sqrt n}$ converge? ($q&gt;0$)</p> </blockquote> <p>It is clear that if $q\ge1$ the series diverges, but what about $q\in(0,1)$?</p>
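A quick numerical illustration of the behaviour in question, ahead of the proofs in the answers (a sketch, not a proof; plain Python, and the cutoff `10**6` is an arbitrary choice):

```python
# Sketch: partial sums of sum_{n>=1} q^sqrt(n) for a few q in (0, 1).
import math

for q in (0.3, 0.7, 0.95):
    partial = sum(q ** math.sqrt(n) for n in range(1, 10**6))
    print(q, round(partial, 6))
# The partial sums settle to a finite value for each q < 1; for q >= 1 the
# terms do not tend to 0, so the series diverges.
```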
[ { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. 
Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" }, { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. 
We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. 
As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" }, { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. 
Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" }, { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. 
We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. 
As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" }, { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. 
Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" }, { "answer_id": 2109330, "score": 8, "text": "<p>For $q\\in(0,1)$:</p>\n\n<p>$$\\sum_{n=1}^\\infty q^{\\sqrt n}&lt;\\sum_{n=1}^\\infty q^{\\lfloor\\sqrt n\\rfloor}\\stackrel*=\\sum_{k=1}^\\infty (2k+1)q^k&lt;\\infty$$</p>\n\n<p>$(*)$ Note that for each $k\\in\\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\\lfloor\\sqrt n\\rfloor=k$.</p>\n\n<p>EDIT (motivated by Clement's comment):</p>\n\n<p>Let $r\\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are\n$$p(k)=(k+1)^s-k^s$$\nnatural numbers $n$ such that $\\lfloor\\sqrt[s]n\\rfloor=k$. Then\n$$\\sum_{n=1}^\\infty q^{n^r}&lt;\\sum_{k=1}^\\infty p(k)q^k&lt;\\infty$$</p>\n" }, { "answer_id": 2109325, "score": 5, "text": "<p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. 
We can prove it by comparison with, for instance, the series $\\sum_{n=1}^\\infty \\frac{1}{n^2}$.</p>\n\n<p>Let $q\\in(0,1)$ be any number.</p>\n\n<p>$$\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}} = n^2 e^{-\\sqrt{n} \\ln\\frac{1}{q}}\n= e^{-\\sqrt{n} \\ln\\frac{1}{q}+2\\ln n}\n= e^{-\\ln\\frac{1}{q}\\left( \\sqrt{n} - \\frac{2}{\\ln\\frac{1}{q}}\\ln n\\right)}$$\nNow, you can use the fact$^{(\\dagger)}$ that for any real number $a\\in\\mathbb{R}$,\n$$\n\\sqrt{n}-a\\ln n \\xrightarrow[n\\to\\infty]{} \\infty\n$$\nto conclude that $\\frac{q^{\\sqrt{n}}}{\\frac{1}{n^2}}\\xrightarrow[n\\to\\infty]{} 0$.</p>\n\n<hr>\n\n<p>If $(\\dagger)$ is not obvious or known to you:\n$$\n\\sqrt{n}-a\\ln n = \\sqrt{n}\\left(1-a\\frac{\\ln n}{\\sqrt{n}}\\right) \n= \\sqrt{n}\\left(1-2a\\frac{\\ln \\sqrt{n}}{\\sqrt{n}}\\right) $$\nso it is sufficient to know or prove that $\\frac{\\ln x}{x} \\xrightarrow[x\\to\\infty]{} 0$.</p>\n" }, { "answer_id": 2109338, "score": 1, "text": "<p>Let $f:\\mathbb{Z}_{\\geq 0}\\to\\mathbb{R}_{\\geq 0}$ be a function such that, as a function of $m\\in\\mathbb{Z}_{\\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\\big([m,m+1)\\big)$ is polynomially bounded, namely, for some polynomial $p(X)\\in\\mathbb{R}[X]$, $\\big|T_m\\big|\\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\\limsup\\limits_{m\\to\\infty}\\,\\sqrt[m]{\\left|T_m\\right|}\\leq1$.) Then, for $q&gt;0$, the infinite sum $\\displaystyle\\sum_{n=0}^\\infty\\,q^{f(n)}$ converges if and only if $q&lt;1$. </p>\n\n<p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that\n$$\\sum_{n=0}^\\infty\\,q^{f(n)}=\\sum_{m=0}^\\infty\\,q^{m}\\,\\sum_{j\\in T_m}\\,q^{f(j)-m}\\leq\\sum_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|\\,.$$\nSince $\\left|T_m\\right|$ is polynomially bounded, we conclude that $\\sum\\limits_{m=0}^\\infty\\,q^m\\,\\left|T_m\\right|&lt;\\infty$ for $q\\in(0,1)$.</p>\n\n<p>In particular, if $f(n)=\\sqrt{n}$, then $\\big|T_m\\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. Hence, for $q&gt;0$, the sum $\\sum\\limits_{n=0}^\\infty\\,q^{\\sqrt{n}}$ is convergent iff $q&lt;1$.</p>\n" }, { "answer_id": 2109346, "score": 1, "text": "<ul>\n<li>Write $q=\\frac{1}{1+y}$</li>\n<li>Note that $(1+y)^k &gt; {k \\choose 3}y^3$</li>\n<li>Deduce that $q^k &lt; \\frac{y^{-3}3!}{k(k-1)(k-2)}$</li>\n<li>Take $k=\\sqrt{n}$ to see$$q^{\\sqrt{n}}=O(n^{-3/2})$$</li>\n<li>Remark that $n^{-3/2 }$ has a convergent sum</li>\n<li>Deduce that the sum is convergent</li>\n</ul>\n" }, { "answer_id": 2109405, "score": 0, "text": "<p>We can use the estimate $\\sqrt n\\geq\\ln n$ to get\n$$\n\\sum_{n=1}^\\infty q^{\\sqrt n}\\leq\\sum_{n=1}^\\infty q^{\\ln n}=\\sum_{n=1}^\\infty n^{\\ln q}&lt;\\infty,\\ \\ \\text{ if }\\ln q&lt;-1,$$\nwhich happens for $0&lt;q&lt;\\frac{1}{e}\\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p>\n\n<p>Let's fix $k\\geq1$. As $\\frac{\\sqrt n}{k\\ln n}\\to\\infty$ when $n\\to\\infty$ we know that there is some $N(k)\\in\\mathbb{N}$ such that $$\\sqrt{n}\\geq k\\ln n,\\ \\ \\text{for }n\\geq N.$$ Performing the above estimations again we arrive at\n$$\n\\sum_{n=N}^\\infty q^{\\sqrt n}\\leq\\sum_{N}^\\infty n^{k\\ln q}&lt;\\infty,\n\\ \\ \\text{ if }k\\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\\frac{1}{k}}$. 
As $e^{-\\frac{1}{k}}\\nearrow1$ when $k\\to\\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>\n" } ]
Question: Does $\sum q^\sqrt n$ converge? Answer 1: <p>For $q\in(0,1)$:</p> <p>$$\sum_{n=1}^\infty q^{\sqrt n}&lt;\sum_{n=1}^\infty q^{\lfloor\sqrt n\rfloor}\stackrel*=\sum_{k=1}^\infty (2k+1)q^k&lt;\infty$$</p> <p>$(*)$ Note that for each $k\in\Bbb N$ there are exactly $2k+1$ natural numbers $n$ such that $\lfloor\sqrt n\rfloor=k$.</p> <p>EDIT (motivated by Clement's comment):</p> <p>Let $r\in(0,1)$. Take some natural $s&gt;1/r$. For each natural $k$, there are $$p(k)=(k+1)^s-k^s$$ natural numbers $n$ such that $\lfloor\sqrt[s]n\rfloor=k$. Then $$\sum_{n=1}^\infty q^{n^r}&lt;\sum_{k=1}^\infty p(k)q^k&lt;\infty$$</p> Answer 2: <p><strong>Yes</strong>, it does converge <em>(and quite fast, but that's besides the point)</em>. We can prove it by comparison with, for instance, the series $\sum_{n=1}^\infty \frac{1}{n^2}$.</p> <p>Let $q\in(0,1)$ be any number.</p> <p>$$\frac{q^{\sqrt{n}}}{\frac{1}{n^2}} = n^2 e^{-\sqrt{n} \ln\frac{1}{q}} = e^{-\sqrt{n} \ln\frac{1}{q}+2\ln n} = e^{-\ln\frac{1}{q}\left( \sqrt{n} - \frac{2}{\ln\frac{1}{q}}\ln n\right)}$$ Now, you can use the fact$^{(\dagger)}$ that for any real number $a\in\mathbb{R}$, $$ \sqrt{n}-a\ln n \xrightarrow[n\to\infty]{} \infty $$ to conclude that $\frac{q^{\sqrt{n}}}{\frac{1}{n^2}}\xrightarrow[n\to\infty]{} 0$.</p> <hr> <p>If $(\dagger)$ is not obvious or known to you: $$ \sqrt{n}-a\ln n = \sqrt{n}\left(1-a\frac{\ln n}{\sqrt{n}}\right) = \sqrt{n}\left(1-2a\frac{\ln \sqrt{n}}{\sqrt{n}}\right) $$ so it is sufficient to know or prove that $\frac{\ln x}{x} \xrightarrow[x\to\infty]{} 0$.</p> Answer 3: <p>Let $f:\mathbb{Z}_{\geq 0}\to\mathbb{R}_{\geq 0}$ be a function such that, as a function of $m\in\mathbb{Z}_{\geq 0}$, the cardinality of the preimage $T_m:=f^{-1}\big([m,m+1)\big)$ is polynomially bounded, namely, for some polynomial $p(X)\in\mathbb{R}[X]$, $\big|T_m\big|\leq p(m)$ for every $m$. (In fact, we only need that $T_m$ is finite for all $m$ and $\limsup\limits_{m\to\infty}\,\sqrt[m]{\left|T_m\right|}\leq1$.) Then, for $q&gt;0$, the infinite sum $\displaystyle\sum_{n=0}^\infty\,q^{f(n)}$ converges if and only if $q&lt;1$. </p> <p>It is trivial that $q&lt;1$ is necessary for the sum to converge. We shall show that the sum indeed converges when $q&lt;1$. Observe that $$\sum_{n=0}^\infty\,q^{f(n)}=\sum_{m=0}^\infty\,q^{m}\,\sum_{j\in T_m}\,q^{f(j)-m}\leq\sum_{m=0}^\infty\,q^m\,\left|T_m\right|\,.$$ Since $\left|T_m\right|$ is polynomially bounded, we conclude that $\sum\limits_{m=0}^\infty\,q^m\,\left|T_m\right|&lt;\infty$ for $q\in(0,1)$.</p> <p>In particular, if $f(n)=\sqrt{n}$, then $\big|T_m\big|=(m+1)^2-m^2=2m+1$ is linear, whence polynomially bounded. Hence, for $q&gt;0$, the sum $\sum\limits_{n=0}^\infty\,q^{\sqrt{n}}$ is convergent iff $q&lt;1$.</p> Answer 4: <ul> <li>Write $q=\frac{1}{1+y}$</li> <li>Note that $(1+y)^k &gt; {k \choose 3}y^3$</li> <li>Deduce that $q^k &lt; \frac{y^{-3}3!}{k(k-1)(k-2)}$</li> <li>Take $k=\sqrt{n}$ to see$$q^{\sqrt{n}}=O(n^{-3/2})$$</li> <li>Remark that $n^{-3/2 }$ has a convergent sum</li> <li>Deduce that the sum is convergent</li> </ul> Answer 5: <p>We can use the estimate $\sqrt n\geq\ln n$ to get $$ \sum_{n=1}^\infty q^{\sqrt n}\leq\sum_{n=1}^\infty q^{\ln n}=\sum_{n=1}^\infty n^{\ln q}&lt;\infty,\ \ \text{ if }\ln q&lt;-1,$$ which happens for $0&lt;q&lt;\frac{1}{e}\approx0.367878$. We have thus obtained only a partial solution, but this argument can be modified in order to get the desired result:</p> <p>Let's fix $k\geq1$. 
As $\frac{\sqrt n}{k\ln n}\to\infty$ when $n\to\infty$ we know that there is some $N(k)\in\mathbb{N}$ such that $$\sqrt{n}\geq k\ln n,\ \ \text{for }n\geq N.$$ Performing the above estimations again we arrive at $$ \sum_{n=N}^\infty q^{\sqrt n}\leq\sum_{N}^\infty n^{k\ln q}&lt;\infty, \ \ \text{ if }k\ln q&lt;-1,$$ which happens for $0&lt;q&lt;e^{-\frac{1}{k}}$. As $e^{-\frac{1}{k}}\nearrow1$ when $k\to\infty$ this yields the desired convergence for $0&lt;q&lt;1$. </p>
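A quick numerical sanity check of the thread's conclusion (not part of the original exchange; a minimal sketch assuming a standard Python 3 interpreter): for any $q\in(0,1)$ the partial sums of $\sum_n q^{\sqrt n}$ should stabilise below the closed form of the dominating series from Answer 1, $\sum_{k\ge1}(2k+1)q^k=q(3-q)/(1-q)^2$. The cutoff `N = 200_000` is an arbitrary illustrative choice.

```python
import math

def partial_sum(q: float, N: int) -> float:
    """Partial sum of q**sqrt(n) for n = 1..N."""
    return sum(q ** math.sqrt(n) for n in range(1, N + 1))

def dominating_series(q: float) -> float:
    """Closed form of sum_{k>=1} (2k+1) q**k, which bounds the series above."""
    return q * (3.0 - q) / (1.0 - q) ** 2

for q in (0.3, 0.7, 0.9):
    s = partial_sum(q, 200_000)
    print(f"q = {q}: partial sum = {s:.6f} <= bound = {dominating_series(q):.6f}")
```

Even for q = 0.9 the term at n = 200 000 is below 1e-20, so the printed partial sum is numerically converged, consistent with the comparison in Answer 1.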
$\ker S\subset\ker T\Leftrightarrow\exists R, T=RS$
2,109,364
5
https://math.stackexchange.com/questions/2109364/ker-s-subset-ker-t-leftrightarrow-exists-r-t-rs
<p>Let $V,W$ be two vector spaces over $F=\mathbb{R}$ or $\mathbb{C}$, such that $W$ is finite dimensional and $S,T\in L(V,W)$.</p> <blockquote> <p>Show that $\ker S\subset\ker T$ if, and only if, there exists a linear operator $R:W\to W$ such that $T=RS$.</p> </blockquote> <hr> <p>The converse was easy to verify, since if $Sv = 0$, then $Tv = RSv = R(0) = 0$.</p> <p>I tried to prove the other assertion by constructing such an $R$: considering bases for $\operatorname{range} T$ and $\operatorname{range} S$ and sending one to the other, but this is as far as I got. Could anyone give me a hint?</p>
[ { "answer_id": 2109378, "score": 4, "text": "<p>You are on the right track.</p>\n\n<p>Define a linear map $R_0\\colon S(V)\\to W$ as follows: For $w\\in S(V)$, pick $v\\in V$ with $S(v)=w$ (possible by definition of $S(V)$). Now let $R_0(w)=T(v)$. This is well-defined because if $S(v')=w$, too, then $v-v'\\in \\ker S\\subseteq \\ker T$, hence $T(v)=T(v')$.\nOf course, we can extend $R_0$ to a linear map $R$ defined on all of $W$ by picking a complementary subspace (or: by extending a basis of $S(V)$ to a basis of $W$).</p>\n" }, { "answer_id": 2109425, "score": 2, "text": "<p>Consider a basis $\\{v_1,\\dots,v_r\\}$ of $\\ker S$, which we can extend to a basis $\\{v_1,\\dots,v_n\\}$ of $V$.</p>\n\n<p>The set $B=\\{S(v_{r+1}),\\dots,S(v_n)\\}$ is a basis for the image of $S$, as is readily verified.</p>\n\n<p>The map $R$ must satisfy $RS=T$, so $RS(v_i)=T(v_i)$ is the forced choice when $r+1\\le i\\le n$. </p>\n\n<p>Just complete $B$ to a basis of $W$ and define $R$ to be zero (or whatever) on the remaining elements.</p>\n\n<p>For $1\\le i\\le r$, we have $RS(v_i)=0=T(v_i)$. Thus $RS$ and $T$ coincide on a basis of $V$.</p>\n\n<hr>\n\n<p>Note. Finite dimensionality is not required, provided one assumes that every vector space has a basis.</p>\n" } ]
Question: $\ker S\subset\ker T\Leftrightarrow\exists R, T=RS$ Answer 1: <p>You are on the right track.</p> <p>Define a linear map $R_0\colon S(V)\to W$ as follows: For $w\in S(V)$, pick $v\in V$ with $S(v)=w$ (possible by definition of $S(V)$). Now let $R_0(w)=T(v)$. This is well-defined because if $S(v')=w$, too, then $v-v'\in \ker S\subseteq \ker T$, hence $T(v)=T(v')$. Of course, we can extend $R_0$ to a linear map $R$ defined on all of $W$ by picking a complementary subspace (or: by extending a basis of $S(V)$ to a basis of $W$).</p> Answer 2: <p>Consider a basis $\{v_1,\dots,v_r\}$ of $\ker S$, which we can extend to a basis $\{v_1,\dots,v_n\}$ of $V$.</p> <p>The set $B=\{S(v_{r+1}),\dots,S(v_n)\}$ is a basis for the image of $S$, as is readily verified.</p> <p>The map $R$ must satisfy $RS=T$, so $RS(v_i)=T(v_i)$ is the forced choice when $r+1\le i\le n$. </p> <p>Just complete $B$ to a basis of $W$ and define $R$ to be zero (or whatever) on the remaining elements.</p> <p>For $1\le i\le r$, we have $RS(v_i)=0=T(v_i)$. Thus $RS$ and $T$ coincide on a basis of $V$.</p> <hr> <p>Note. Finite dimensionality is not required, provided one assumes that every vector space has a basis.</p>
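A concrete illustration of the factorisation (an addition, not from the original answers; assumes numpy is available): when $\ker S\subseteq\ker T$, one explicit choice is $R=TS^{+}$ with $S^{+}$ the Moore-Penrose pseudoinverse, because $S^{+}S$ is the projection onto $(\ker S)^{\perp}$ and $T$ vanishes on $\ker S$, so $TS^{+}S=T$. The dimensions and seed below are arbitrary illustrative choices; $\ker S\subseteq\ker T$ is forced by constructing $T$ as a matrix times $S$.

```python
import numpy as np

rng = np.random.default_rng(0)

S = rng.standard_normal((3, 5))   # S : V -> W with dim V = 5, dim W = 3, so ker S != 0
M = rng.standard_normal((3, 3))
T = M @ S                         # guarantees ker S <= ker T

R = T @ np.linalg.pinv(S)         # candidate factor R : W -> W

# R S = T S^+ S = T, since S^+ S projects onto (ker S)^perp and T kills ker S
assert np.allclose(R @ S, T)
print("max |RS - T| =", np.abs(R @ S - T).max())
```

The basis-extension construction in the answers gives the same $R$ up to the arbitrary choice on a complement of $S(V)$; the pseudoinverse merely fixes that choice orthogonally.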
Proof: $\lim\limits_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=e^{-1}$
2,109,409
5
https://math.stackexchange.com/questions/2109409/proof-lim-limits-n-to-infty-frac-sqrtnnn-e-1
<p>Prove that <span class="math-container">$\displaystyle\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=e^{-1}$</span>.</p> <p>Here's my solution:</p> <p>Stirling's formula tells us <span class="math-container">$$\lim_{n\to\infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$$</span>which implies <span class="math-container">$$\lim_{n\to\infty}\sqrt[n]{\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}}=1$$</span>then simplifying the left side we have <span class="math-container">$$\lim_{n\to\infty}\sqrt[n]{\frac{n!}{n^n}}\lim_{n\to\infty}\sqrt[n]{e^{n}}\lim_{n\to\infty}\frac{1}{\sqrt[2n]{2\pi n}}=e\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=1$$</span> since <span class="math-container">$\lim_{n\to\infty}\sqrt[2n]{2\pi n}=1$</span>. Divide both sides by <span class="math-container">$e$</span> and we're done.</p> <p>Is this correct? This is a problem from <em>Problems in Real Analysis</em> by Radulescu and Andreescu. The book gives two other proofs. Thanks!</p>
[ { "answer_id": 2110823, "score": 5, "text": "<p>Let $$A_n=\\frac{\\sqrt[n]{n!}}n.$$\nThen $$\\log(A_n)=\\frac 1n \\log(n!)-\\log(n)=\\frac 1n \\sum_{k=1}^n\\log k-\\log(n)=\\sum_{k=1}^n\\frac1n\\log(\\frac{k}{n})\\to\\int_0^1\\log xdx=-1$$\nand hence\n$$ \\lim_{n\\to\\infty} A_n=e^{-1}. $$</p>\n" }, { "answer_id": 2109957, "score": 2, "text": "<p>You could make it shorter \n$$A=\\frac{\\sqrt[n]{n!}}n\\implies \\log(A)=\\frac 1n \\log(n!)-\\log(n)$$ Using Stirling's approximation $$ \\log(n!)=n (\\log (n)-1)+\\frac{1}{2} \\left(\\log (2 \\pi )+\\log \\left(n\\right)\\right)+\\frac{1}{12 n}+O\\left(\\frac{1}{n^2}\\right)$$ leads to $$\\log(A)=-1+\\frac{\\log (n)+\\log (2 \\pi )}{2 n}+\\frac{1}{12 n^2}+O\\left(\\frac{1}{n^3}\\right)$$</p>\n" } ]
Question: Proof: $\lim\limits_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=e^{-1}$ Answer 1: <p>Let $$A_n=\frac{\sqrt[n]{n!}}n.$$
Then $$\log(A_n)=\frac 1n \log(n!)-\log(n)=\frac 1n \sum_{k=1}^n\log k-\log(n)=\sum_{k=1}^n\frac1n\log(\frac{k}{n})\to\int_0^1\log xdx=-1$$
and hence
$$ \lim_{n\to\infty} A_n=e^{-1}. $$</p> Answer 2: <p>You could make it shorter
$$A=\frac{\sqrt[n]{n!}}n\implies \log(A)=\frac 1n \log(n!)-\log(n)$$ Using Stirling approximation$$ \log(n!)=n (\log (n)-1)+\frac{1}{2} \left(\log (2 \pi )+\log \left(n\right)\right)+\frac{1}{12 n}+O\left(\frac{1}{n^2}\right)$$ leads to $$\log(A)=-1+\frac{\log (n)+\log (2 \pi )}{2 n}+\frac{1}{12 n^2}+O\left(\frac{1}{n^3}\right)$$</p>
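Both answers can be sanity-checked numerically. Below is a minimal Python sketch (ours, not part of the original thread) that computes $A_n=\sqrt[n]{n!}/n$ through the log-sum from Answer 1, which avoids overflow from evaluating $n!$ directly; the function name `A` is our own choice.

```python
import math

def A(n: int) -> float:
    # log A_n = (1/n) * sum_{k=1}^n log(k/n), the Riemann sum used in Answer 1
    log_A = sum(math.log(k / n) for k in range(1, n + 1)) / n
    return math.exp(log_A)

for n in (10, 100, 10_000):
    print(n, A(n))          # tends to 0.3678... as n grows
print("1/e =", math.exp(-1))
```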
problem on convergence of series $\sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a$
2,109,419
5
https://math.stackexchange.com/questions/2109419/problem-on-convergence-of-series-sum-n-1-infty-left-frac1n-tan-1-frac
<p>Finding the set of all positive values of $a$ for which the series $$ \sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a $$ converges. How will it depend on the value of $a$, the power of the general term? After expanding the arctan term I get the general term in the form $\left[\frac{1}{n^3}\left(\frac13-\frac{1}{5n^2}+\cdots\right)\right]^a$. Now how does it depend on $a$?</p>
[ { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" }, { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" }, { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" }, { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. 
One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" }, { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" }, { "answer_id": 2109423, "score": 4, "text": "<p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \\to \\infty$,\n$$\n\\arctan \\frac1n=\\frac1n+O\\left(\\frac1{n^3}\\right)\n$$ giving, as $n \\to \\infty$,\n$$\n\\left(\\frac1n-\\arctan \\frac1n\\right)^a=O\\left(\\frac1{n^{3a}}\\right)\n$$ I hope you can take it from here.</p>\n" }, { "answer_id": 2109424, "score": 0, "text": "<p><em>Since you don't ask for which values of $a$ the series converges, here is an heuristic answer.</em></p>\n\n<p>When $n$ goes to $\\infty$, $\\frac 1n$ goes to $0$ so $\\tan^{—1}(1/n)$ goes to $0$.</p>\n\n<p>So the general term (without the \"$a$\") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>\n\n<p>It will converge if the general term goes to $0$ sufficiently fast.</p>\n\n<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>\n" } ]
Question: problem on convergence of series $\sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a$ Answer 1: <p><strong>Hint</strong>. One may use a Taylor series expansion, as $n \to \infty$,
$$
\arctan \frac1n=\frac1n+O\left(\frac1{n^3}\right)
$$ giving, as $n \to \infty$,
$$
\left(\frac1n-\arctan \frac1n\right)^a=O\left(\frac1{n^{3a}}\right)
$$ I hope you can take it from here.</p> Answer 2: <p><em>Since you don't ask for which values of $a$ the series converges, here is a heuristic answer.</em></p>
<p>When $n$ goes to $\infty$, $\frac 1n$ goes to $0$ so $\tan^{-1}(1/n)$ goes to $0$.</p>
<p>So the general term (without the "$a$") of your series goes to $0$, so there is a reasonable chance that the series will converge.</p>
<p>It will converge if the general term goes to $0$ sufficiently fast.</p>
<p>That would be the case if $a$ is sufficiently big, but it won't be the case if $a$ is too small.</p>
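Making the hint quantitative: $\frac1n-\arctan\frac1n=\frac{1}{3n^3}+O(n^{-5})$, so comparison with $\sum n^{-3a}$ gives convergence exactly when $a>\frac13$. A minimal Python sketch (ours, not from the thread) checking that asymptotic:

```python
import math

# the general term (before raising to the power a) should behave like 1/(3 n^3)
for n in (10, 100, 1000):
    term = 1 / n - math.atan(1 / n)
    print(n, 3 * n**3 * term)  # tends to 1 as n grows, confirming term ~ 1/(3 n^3)
```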
Find all $x$ such that $x^6 = (x+1)^6$.
2,109,609
5
https://math.stackexchange.com/questions/2109609/find-all-x-such-that-x6-x16
<blockquote> <p>Find all $x$ such that $$x^6=(x+1)^6.$$</p> </blockquote> <p>So far, I have found the real solution $x= -\frac{1}{2}$, and the complex solution $x = -\sqrt[3]{-1}$.</p> <p>Are there more, and if so, what would be the most efficient way to find all the solutions to this problem? I am struggling to find the rest of the solutions.</p>
[ { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" }, { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. 
corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" }, { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" }, { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. 
There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" }, { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. 
If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" }, { "answer_id": 2109635, "score": 10, "text": "<p>Since $0$ is not a solution, this is equivalent to solve\n$$\n\\frac{x+1}{x}=\\zeta\n$$\nwhere $\\zeta$ is any sixth root of $1$.</p>\n\n<p>Then $x+1=\\zeta x$, so\n$$\nx=\\frac{1}{\\zeta-1}\n$$\nOf course $\\zeta=1$ should not be considered. There are other five sixth root of $1$, namely\n$$\ne^{2k\\pi i/3}\n$$\nfor $1\\le k\\le 5$.</p>\n" }, { "answer_id": 2109621, "score": 5, "text": "<p><strong>Hint</strong> <a href=\"https://www.wolframalpha.com/input/?i=a%5E6-b%5E6\" rel=\"nofollow noreferrer\">We may factor this as</a>:</p>\n\n<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>\n\n<p>So then we have a linear and two quadratics which you should be able to solve.</p>\n" }, { "answer_id": 2109643, "score": 2, "text": "<p>We can simplify the problem by substituting $x=y-\\frac{1}{2}$.</p>\n\n<p>$$x^6 = \\left(x+1\\right)^6$$</p>\n\n<p>$$\\left(y-\\frac{1}{2}\\right)^6 = \\left(y+\\frac{1}{2}\\right)^6$$</p>\n\n<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>\n\n<p>$$6y^5+5y^3+\\frac{3}{8}y=0$$</p>\n\n<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>\n\n<p>$$6y\\left(y^2+\\frac{3}{4}\\right)\\left(y^2+\\frac{1}{12}\\right)=0$$</p>\n\n<p>Think you can take it from here?</p>\n" }, { "answer_id": 2109626, "score": 0, "text": "<p>Expand out the right hand side (binomial theorem will save you time):\n$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:\n$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see fundamental theorem of algebra), if you are only working in the real numbers $x=-\\frac 12$ is the only solution.</p>\n" } ]
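The substitution and factorization steps in the answers above are easy to reproduce symbolically. A minimal sketch of ours in Python (assuming the sympy library is available; this is not part of the original answers):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = sp.expand((x + 1)**6 - x**6)
print(p)               # 6*x**5 + 15*x**4 + 20*x**3 + 15*x**2 + 6*x + 1
q = sp.expand(p.subs(x, y - sp.Rational(1, 2)))
print(q)               # 6*y**5 + 5*y**3 + 3*y/8 -- only odd powers survive
print(sp.factor(q))    # y*(4*y**2 + 3)*(12*y**2 + 1)/8 -- root y=0 plus two quadratics
print(sp.solve(p, x))  # the five complex solutions, including x = -1/2
```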
Question: Find all $x$ such that $x^6 = (x+1)^6$. Answer 1: <p>Since $0$ is not a solution, this is equivalent to solve
$$
\frac{x+1}{x}=\zeta
$$
where $\zeta$ is any sixth root of $1$.</p>
<p>Then $x+1=\zeta x$, so
$$
x=\frac{1}{\zeta-1}
$$
Of course $\zeta=1$ should not be considered. There are five other sixth roots of $1$, namely
$$
e^{k\pi i/3}
$$
for $1\le k\le 5$.</p> Answer 2: <p><strong>Hint</strong> <a href="https://www.wolframalpha.com/input/?i=a%5E6-b%5E6" rel="nofollow noreferrer">We may factor this as</a>:</p>
<p>$$(x-(x+1))(x+(x+1))(x^2-x(x+1)+(x+1)^2)(x^2+x(x+1)+(x+1)^2)=0$$</p>
<p>So then we have a linear and two quadratic factors which you should be able to solve.</p> Answer 3: <p>We can simplify the problem by substituting $x=y-\frac{1}{2}$.</p>
<p>$$x^6 = \left(x+1\right)^6$$</p>
<p>$$\left(y-\frac{1}{2}\right)^6 = \left(y+\frac{1}{2}\right)^6$$</p>
<p>If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain.</p>
<p>$$6y^5+5y^3+\frac{3}{8}y=0$$</p>
<p>You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.</p>
<p>$$6y\left(y^2+\frac{3}{4}\right)\left(y^2+\frac{1}{12}\right)=0$$</p>
<p>Think you can take it from here?</p> Answer 4: <p>Expand out the right hand side (binomial theorem will save you time):
$$x^6=x^6+6x^5+15x^4+20x^3+15x^2+6x+1$$ Subtract $x^6$ from both sides:
$$0=6x^5+15x^4+20x^3+15x^2+6x+1$$ Now just find the roots of this $5$th degree polynomial. If you are working over the complex numbers there will be exactly $5$ (see the fundamental theorem of algebra); if you are only working in the real numbers, $x=-\frac 12$ is the only solution.</p>
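Answer 1's closed form can be checked directly. A minimal Python sketch of ours (standard library only, not part of the thread), listing the five solutions $x=1/(\zeta-1)$ for $\zeta=e^{k\pi i/3}$, $1\le k\le 5$:

```python
import cmath

for k in range(1, 6):
    zeta = cmath.exp(1j * cmath.pi * k / 3)  # a sixth root of unity, zeta != 1
    x = 1 / (zeta - 1)
    # residual should be ~0 up to floating-point error; k=3 gives the real root x=-1/2
    print(k, x, abs(x**6 - (x + 1)**6))
```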
Show that every point of the Cantor set is a limit point
2,110,663
5
https://math.stackexchange.com/questions/2110663/show-that-every-point-of-cantor-set-is-a-limit-point
<p>The Cantor set $C$ is defined as</p> <p>$$ A_0 := [0, 1], \quad A_{n+1} := \frac{1}{3}(A_n \, \cup \, (2+A_n)), \quad C:= \bigcap_{n\in\mathbb{N}_0}A_n \text{.} $$</p> <p>I need to show that $C$ is perfect (every point in $C$ is a limit point). I strongly believe that my proof idea is right, but I am stuck at the point where I need to show that each $A_n$ consists of $2^n$ closed intervals and that the boundary points of these intervals are in $C$.</p>
[ { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. 
Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. </p>\n" }, { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. 
In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. </p>\n" }, { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. 
To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. 
</p>\n" }, { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. 
Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. </p>\n" }, { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. 
In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. </p>\n" }, { "answer_id": 2777628, "score": 4, "text": "<p>The ternary expansion of <span class=\"math-container\">$x \\in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is,\n<span class=\"math-container\">$$\n[0,1) = [0,1/3) \\cup [1/3 \\cup 2/3) \\cup [2/3 \\cup 1).\n$$</span>\nTo find the ternary expansion of <span class=\"math-container\">$x$</span>, see in which of the three subintervals <span class=\"math-container\">$x$</span> lies in. If <span class=\"math-container\">$x \\in [0,1/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$0$</span>; if <span class=\"math-container\">$x \\in [1/3,2/3)$</span>, then the first digit in the expansion is <span class=\"math-container\">$1$</span>; if <span class=\"math-container\">$x \\in [2/3,1)$</span>, then the first digit in the expansion is <span class=\"math-container\">$2$</span>. 
To find the next digit, further divide the subinterval in which <span class=\"math-container\">$x$</span> lies into three equal parts and repeat the same process.</p>\n\n<p>By the way the Cantor set is contructed, the digit <span class=\"math-container\">$1$</span> can never appear in a ternary expansion of <span class=\"math-container\">$x \\in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class=\"math-container\">$x = 2/3$</span>, in which case, its ternary expansion is <span class=\"math-container\">$0.1$</span>. However, <span class=\"math-container\">$2/3$</span> can also be represented as <span class=\"math-container\">$0.022222\\dots {}= 0.0\\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class=\"math-container\">$2/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class=\"math-container\">$1$</span> to be <span class=\"math-container\">$0.22222\\dots {}=0.\\bar{2}$</span> instead of <span class=\"math-container\">$1.00000\\dots$</span>. In general, if <span class=\"math-container\">$x$</span> has a ternary expansions ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p>\n\n<p>Conversely, any real number in <span class=\"math-container\">$[0,1]$</span> whose ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class=\"math-container\">$1$</span> somewhere in their ternary expansion.</p>\n\n<p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class=\"math-container\">$C$</span> is a limit point of <span class=\"math-container\">$C$</span>. So, let <span class=\"math-container\">$x \\in C$</span>. Then, <span class=\"math-container\">$x$</span> has a ternary expansion of the form <span class=\"math-container\">$0.x_1 x_2 x_3 \\dots$</span> where each <span class=\"math-container\">$x_n$</span> is either <span class=\"math-container\">$0$</span> or <span class=\"math-container\">$2$</span>. The sequence of numbers <span class=\"math-container\">$y_k = 0.x_1x_2\\dots x_k$</span>, formed by truncating the ternary expansion at the <span class=\"math-container\">$k$</span>th position, clearly converges to <span class=\"math-container\">$x$</span>. Moreover, <span class=\"math-container\">$y_k \\in C$</span> for each <span class=\"math-container\">$k$</span> because its ternary expansion consists only of <span class=\"math-container\">$0$</span>'s and <span class=\"math-container\">$2$</span>'s. Hence, <span class=\"math-container\">$x$</span> is a limit point of <span class=\"math-container\">$C$</span>, and so <span class=\"math-container\">$C$</span> is perfect.</p>\n" }, { "answer_id": 2110697, "score": 3, "text": "<p>Let $x \\in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \\subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \\neq x$.</p>\n\n<p>Now, by construction, $x_n \\in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \\in C$. </p>\n" } ]
Question: Show that every point of cantor set is a limit point Answer 1: <p>The ternary expansion of <span class="math-container">$x \in [0,1)$</span> is found by this construction: divide the interval into three equal parts, that is, <span class="math-container">$$ [0,1) = [0,1/3) \cup [1/3, 2/3) \cup [2/3, 1). $$</span> To find the ternary expansion of <span class="math-container">$x$</span>, see in which of the three subintervals <span class="math-container">$x$</span> lies. If <span class="math-container">$x \in [0,1/3)$</span>, then the first digit in the expansion is <span class="math-container">$0$</span>; if <span class="math-container">$x \in [1/3,2/3)$</span>, then the first digit in the expansion is <span class="math-container">$1$</span>; if <span class="math-container">$x \in [2/3,1)$</span>, then the first digit in the expansion is <span class="math-container">$2$</span>. To find the next digit, further divide the subinterval in which <span class="math-container">$x$</span> lies into three equal parts and repeat the same process.</p> <p>By the way the Cantor set is constructed, the digit <span class="math-container">$1$</span> can never appear in a ternary expansion of <span class="math-container">$x \in C$</span> because the middle one-third is removed successively from each sub-interval. Of course, this is unless <span class="math-container">$x = 1/3$</span>, in which case its ternary expansion is <span class="math-container">$0.1$</span>. However, <span class="math-container">$1/3$</span> can also be represented as <span class="math-container">$0.022222\dots {}= 0.0\bar{2}$</span> and we'll make an exception and choose this as the ternary expansion of <span class="math-container">$1/3$</span>. Similarly, for consistency we can choose the ternary expansion of <span class="math-container">$1$</span> to be <span class="math-container">$0.22222\dots {}=0.\bar{2}$</span> instead of <span class="math-container">$1.00000\dots$</span>. In general, if <span class="math-container">$x$</span> has a ternary expansion ending in an infinite string of twos, then we will choose that over a representation ending in an infinite string of zeros.</p> <p>Conversely, any real number in <span class="math-container">$[0,1]$</span> whose ternary expansion consists only of <span class="math-container">$0$</span>'s and <span class="math-container">$2$</span>'s is in the Cantor set by the same reasoning: the numbers that are removed in the construction of the Cantor set are precisely those that have a <span class="math-container">$1$</span> somewhere in their ternary expansion.</p> <p>Now, to show that the Cantor set is perfect, it suffices to show that every point in <span class="math-container">$C$</span> is a limit point of <span class="math-container">$C$</span>. So, let <span class="math-container">$x \in C$</span>. Then, <span class="math-container">$x$</span> has a ternary expansion of the form <span class="math-container">$0.x_1 x_2 x_3 \dots$</span> where each <span class="math-container">$x_n$</span> is either <span class="math-container">$0$</span> or <span class="math-container">$2$</span>. The sequence of numbers <span class="math-container">$y_k = 0.x_1x_2\dots x_k$</span>, formed by truncating the ternary expansion at the <span class="math-container">$k$</span>th position, converges to <span class="math-container">$x$</span>, since the geometric tail gives <span class="math-container">$|x - y_k| \le 3^{-k}$</span>. Moreover, <span class="math-container">$y_k \in C$</span> for each <span class="math-container">$k$</span> because its ternary expansion consists only of <span class="math-container">$0$</span>'s and <span class="math-container">$2$</span>'s. Hence, <span class="math-container">$x$</span> is a limit point of <span class="math-container">$C$</span>, and so <span class="math-container">$C$</span> is perfect.</p> Answer 2: <p>Let $x \in C$, and let $S$ be an open interval containing $x$. Let $I_n$ be the closed interval in $A_n$ (of which there are $2^n$) that contains $x$. Choose $n$ large enough so that $I_n \subset S$. Let $x_n$ be an endpoint of $I_n$ such that $x_n \neq x$.</p> <p>Now, by construction, $x_n \in C$ (you should verify this). Hence, $x$ is a limit point: an arbitrary open interval $S$ containing $x$ contains $x_n \in C$.</p>
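A minimal numerical sketch of the truncation bound used in Answer 1 above — not part of the original answers, written in R to match the only language used elsewhere in this document. It draws a point of the Cantor set from random ternary digits in {0, 2} and checks that the truncations $y_k$ satisfy $|x - y_k| \le 3^{-k}$:

<pre><code># Sketch (R): truncations of a ternary expansion with digits in {0, 2}
# converge to the point at rate 3^(-k), as claimed in Answer 1.
set.seed(1)
K <- 20
digits <- sample(c(0, 2), K, replace = TRUE)  # ternary digits x_1, ..., x_K
x  <- sum(digits * 3^(-(1:K)))                # x = 0.x_1 x_2 ... x_K (base 3)
yk <- cumsum(digits * 3^(-(1:K)))             # y_k = truncation after k digits

# The tail of the geometric series bounds the error: |x - y_k| <= 3^(-k).
all(abs(x - yk) <= 3^(-(1:K)))                # TRUE
</code></pre>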
Probability of drawing the same color marble twice in a bag of $10$ red, $10$ orange, $10$ green.
2,111,073
5
https://math.stackexchange.com/questions/2111073/probability-of-drawing-the-same-color-marble-twice-in-a-bag-of-10-red-10-or
<p>The question is worded as follows.</p> <blockquote> <p>"A bag of marbles contains $10$ red marbles, $10$ orange marbles, and $10$ green marbles. You randomly take one marble out of the bag for yourself, and without replacement you then randomly take another marble out of the bag to give to your roommate. What is the probability that your roommate has the same color of marble as you do?"</p> </blockquote> <p>The answer says that it is $10/30 \times 9/29 = 0.103$. I understand that this question is testing your understanding of "dependent events," where the occurrence of one event affects the likelihood of another event in the sequence.</p> <p>What I don't understand is why it isn't simply $9/29$. I'll explain my thinking, and hopefully someone can point to where I'm wrong.</p> <p>On the first draw, there are $30$ total possible marbles that can be matched by your roommate. Draw $1$: P{drawing a matchable marble} = $30/30$. On the second draw, there are $9$ marbles of the same color remaining out of $29$. Draw $2$: P{drawing the same color} = $9/29$.</p> <p>$$30/30 \times 9/29 = 9/29.$$</p> <p>It seems to me that the given answer instead matches the question "what is the probability you both draw a green marble?" However, my professor told me that wasn't the case, and that the answer is correct.</p> <p><strong>EDIT</strong> Turns out the problem was incorrect; the answer actually is $9/29$! Thank you guys for your help :)</p>
[ { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" }, { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. 
The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" }, { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" }, { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. 
I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" }, { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. 
The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" }, { "answer_id": 2111082, "score": 3, "text": "<p>Well, we will do the following:</p>\n\n<p>First, fix a colour. This is done in one of three ways.</p>\n\n<p>Then, we are now going to calculate the probability that both you and your friend pick that colour ball out of the bag. </p>\n\n<p>For you, that probability is <span class=\"math-container\">$\\frac {10}{30}=\\frac 13$</span>, and for your friend, it is <span class=\"math-container\">$\\frac 9{29}$</span>. So the answer is <span class=\"math-container\">$3 \\times \\frac{1}{3} \\times \\frac {9}{29} = \\frac{9}{29}$</span>, which is, your answer.</p>\n\n<p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you would have believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p>\n\n<p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p>\n" }, { "answer_id": 2111116, "score": 0, "text": "<p>If your teacher still does not believe, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. The following code will work in R, but you could do something similar with other languages</p>\n\n<pre><code># Make a vector showing the colors of marbles\ncolor=rep(c(1,2,3), each=10)\n\n# Decide how many times your want to draw out two marbles\nnSamples = 10000\n# Make a vector to hold the results\nMatch = rep(NA, nSamples)\n\n# Draw out two marbles without replacement nSamples times\nfor(i in 1:nSamples){\n sampleTemp = sample(color, 2, replace=FALSE)\n Match[i] = sampleTemp[1] == sampleTemp[2]\n}\n\n# Check the fraction of samples in which the colors match\nsum(Match)/nSamples\n</code></pre>\n\n<p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>\n" } ]
Question: Probability of drawing the same color marble twice in a bag of $10$ red, $10$ orange, $10$ green. Answer 1: <p>Well, we will do the following:</p> <p>First, fix a colour. This is done in one of three ways.</p> <p>Then, we calculate the probability that both you and your friend pick a ball of that colour out of the bag.</p> <p>For you, that probability is <span class="math-container">$\frac {10}{30}=\frac 13$</span>, and for your friend, it is <span class="math-container">$\frac 9{29}$</span>. So the answer is <span class="math-container">$3 \times \frac{1}{3} \times \frac {9}{29} = \frac{9}{29}$</span>, which is your answer.</p> <p>So you are correct. I find it honestly atrocious (although everybody is entitled to a mistake) to see that after numerous reviews, these mistakes are present in textbooks. It is elementary probability, so if you had believed both the book and your professor, you would have gone down the wrong path. So, I should say, it was good you asked this question.</p> <p>EDIT: As you may already have noticed, the textbook's answer would have been correct if you had fixed the colour of the ball before drawing. But then, that's no excuse for such a terrible mistake.</p> Answer 2: <p>If your teacher still does not believe you, perhaps you can try a simulation. This does not prove that you are right, but it is illustrative. The following code will work in R, but you could do something similar with other languages.</p> <pre><code># Make a vector showing the colors of marbles
color=rep(c(1,2,3), each=10)

# Decide how many times you want to draw out two marbles
nSamples = 10000
# Make a vector to hold the results
Match = rep(NA, nSamples)

# Draw out two marbles without replacement nSamples times
for(i in 1:nSamples){
 sampleTemp = sample(color, 2, replace=FALSE)
 Match[i] = sampleTemp[1] == sampleTemp[2]
}

# Check the fraction of samples in which the colors match
sum(Match)/nSamples
</code></pre> <p>That is nowhere near (10/30)*(9/29) but will almost always be very close to 9/29.</p>
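The disputed value in this record can be settled by a direct count as well as by the simulation in the answers. A short R check (illustrative, not from the original thread) comparing the book's figure with the $9/29$ derived above:

<pre><code># Exact probabilities by counting ordered draws without replacement:
# 3 colors with 10 marbles each; the second draw must match the first.
p_correct <- 3 * (10/30) * (9/29)  # sum over the three colors = 9/29
p_book    <- (10/30) * (9/29)      # the textbook's figure (one fixed color)

c(p_correct, 9/29, p_book)         # 0.3103448 0.3103448 0.1034483
</code></pre>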
$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})...$
2,111,471
5
https://math.stackexchange.com/questions/2111471/1-frac121-frac131-frac141-frac15
<p>Is there any well-known value (or approximation) for this?</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})...$$</p> <p>We know that it converges, since $$\sum_{i=2}^{\infty}\frac{(-1)^{i+1}}{i}=\ln 2-1$$</p> <p>So there is a trivial upper bound $\frac{2}{e}$ for it. Is there any better result? In addition, is there any similar result for</p> <p>$$(1-\frac{1}{2})(1-\frac{1}{4})(1-\frac{1}{8})(1-\frac{1}{16})...$$ or $$(1+\frac{1}{2})(1+\frac{1}{4})(1+\frac{1}{8})(1+\frac{1}{16})...$$</p>
[ { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... $$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" }, { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... $$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" }, { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... $$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" }, { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... 
$$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" }, { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... $$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" }, { "answer_id": 2111472, "score": 7, "text": "<p>HINT:</p>\n\n<p>For the first question,\n$$\\left(1+\\dfrac1{2n+1}\\right)\\left(1-\\dfrac1{2n+2}\\right)=1$$</p>\n\n<p>$$\\prod_{r=1}^n\\left(1-\\dfrac1{2^r}\\right)=\\dfrac{\\prod_{r=1}^n(2^r-1)}{2^{1+2+\\cdots+n}}$$</p>\n" }, { "answer_id": 2111568, "score": 2, "text": "<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... $$\nExcept the first term, consider consecutive couples of terms :</p>\n\n<p>$(1+\\frac{1}{3})(1-\\frac{1}{4})= (\\frac{4}{3})(\\frac{3}{4})= 1$</p>\n\n<p>$(1+\\frac{1}{5})(1-\\frac{1}{6})= (\\frac{6}{5})(\\frac{5}{6})= 1$</p>\n\n<p>$(1+\\frac{1}{7})(1-\\frac{1}{8})= (\\frac{8}{7})(\\frac{7}{8})= 1$</p>\n\n<p>And so on</p>\n\n<p>All couples $=1$ . So, only the first term remains : $(1-\\frac{1}{2})=\\frac{1}{2}$</p>\n\n<p>Finally</p>\n\n<p>$$(1-\\frac{1}{2})(1+\\frac{1}{3})(1-\\frac{1}{4})(1+\\frac{1}{5})(1-\\frac{1}{6})(1+\\frac{1}{7})... = \\frac{1}{2}$$</p>\n" } ]
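<p><em>Editorial aside (not part of the original answers):</em> the hint's finite-product identity is easy to spot-check, noting that the exponent is $1+2+\cdots+n=\frac{n(n+1)}{2}$; for $n=3$, $$\prod_{r=1}^{3}\left(1-\dfrac1{2^r}\right)=\frac{1}{2}\cdot\frac{3}{4}\cdot\frac{7}{8}=\frac{21}{64}=\frac{(2^1-1)(2^2-1)(2^3-1)}{2^{1+2+3}}.$$</p>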
Question: $(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})...$ Answer 1: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 2: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$</p> Answer 3: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 4: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$</p> Answer 5: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 6: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$</p> Answer 7: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 8: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... 
= \frac{1}{2}$$</p> Answer 9: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 10: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$</p> Answer 11: <p>HINT:</p> <p>For the first question, $$\left(1+\dfrac1{2n+1}\right)\left(1-\dfrac1{2n+2}\right)=1$$</p> <p>$$\prod_{r=1}^n\left(1-\dfrac1{2^r}\right)=\dfrac{\prod_{r=1}^n(2^r-1)}{2^{1+2+\cdots+n}}$$</p> Answer 12: <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except the first term, consider consecutive couples of terms :</p> <p>$(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$</p> <p>$(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$</p> <p>$(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$</p> <p>And so on</p> <p>All couples $=1$ . So, only the first term remains : $(1-\frac{1}{2})=\frac{1}{2}$</p> <p>Finally</p> <p>$$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$</p>
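<p><em>Editorial aside (not part of the original posts):</em> the pairing identity both answers rely on can be verified in one line: $$\left(1+\frac{1}{2n+1}\right)\left(1-\frac{1}{2n+2}\right)=\frac{2n+2}{2n+1}\cdot\frac{2n+1}{2n+2}=1,$$ so every partial product ending with a $\left(1-\frac{1}{2k}\right)$ factor equals exactly $\frac{1}{2}$, and the full product converges to $\frac{1}{2}$.</p>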
Probability of double matchings
2,111,685
5
https://math.stackexchange.com/questions/2111685/probability-of-double-matchings
<p>Yesterday, I solved this problem from probability theory: Two similar decks of $N$ distinct cards are matched against a similar target deck. Find the probability of exactly $m \leq N$ matches.</p> <p>I proceeded in the following manner. Let $A_i$ denote the event that the $i^{\text{th}}$ card is matched (from both decks) against the target deck, and let $P(N,m)$ denote the probability that exactly $m$ matches occur. Then</p> <p>$$P(N,m) = S_m - \binom{m+1}{m}S_{m+1} + \binom{m+2}{m}S_{m+2} - \ldots + (-1)^{N-m}\binom{N}{m} S_{N}$$</p> <p>where $S_1 = \sum_{1\leq i \leq N} P(A_i)$, $S_2 = \sum_{1\leq i \lt j \leq N} P(A_i \cap A_j), \ldots$</p> <p>Clearly, we have $$S_{m+k} = \binom{N}{m+k} \frac{(N-m-k)!^2}{N!^2}$$ Therefore, \begin{align*} P(N,m) &amp;= \sum_{k=0}^{N-m} (-1)^k \binom{m+k}{m} \binom{N}{m+k} \frac{(N-m-k)!^2}{N!^2} \\ &amp;= \frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!} \end{align*}</p> <p>After obtaining the above expression, I wondered whether there exists some nice closed formula for the series. So I plugged it into W|A, but it doesn't return one (in terms of elementary functions).</p> <p>Next, I started wondering how this probability function behaves as $N \rightarrow \infty$, because this limit might correspond to some real-world phenomenon (although that is something to consider later).</p> <p>So, I first tried to check the case $m=0$:</p> <p>\begin{align*} \lim_{N \rightarrow \infty} P(N,0) &amp;= \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N} (-1)^k \frac{(N-k)!}{k!} \\ &amp;= \lim_{N \rightarrow \infty} \left(1 - \frac{1}{N} + \frac{1}{2!}\frac{1}{N(N-1)} - \ldots \right) \end{align*}</p> <p>It is not clear to me how to evaluate this limit, since I cannot pass to the limit term by term: the number of terms grows with $N$. So I thought of setting up a recurrence (which may help?). This is what I found:</p> <p>$$P(N+2,0) = P(N+1,0) + \frac{P(N,0)}{(N+1)(N+2)} + \frac{(-1)^{N}}{(N+2)!^2}$$</p> <p>But again, I still couldn't figure out much. I even expressed this as an integral (just because sometimes that helps) and then tried some manipulations, but still no clue:</p> <p>$$P(N,0) = (N+1) \int_0^1 \sum_{k=0}^N\frac{ (-t)^k(1-t)^{N-k}}{k!^2} \mathrm{d}t$$ </p> <p>So these are the questions I am trying to find a solution to:</p> <ol> <li><p>Is there any nice closed form for the expression?</p></li> <li><p>How does the probability function I derived behave as $N \rightarrow \infty$ for a fixed $m$?</p></li> <li><p>What happens as $N \rightarrow \infty$ and $m \rightarrow \infty$?</p></li> </ol> <p>Any help would be appreciated.</p> <p><strong>Edit 1</strong>: I figured out by some computations that $P(N,0) \rightarrow 1$ as $N \rightarrow \infty$, but I guess it still requires a rigorous proof.</p>
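<p><em>Editorial aside (not part of the original question):</em> a minimal cross-check of the derived formula against a direct simulation, assuming the intended model is two independent uniform shuffles matched against a fixed target deck; all names below are illustrative.</p> <pre><code>import random
from math import factorial

def p_exact(N, m):
    # P(N, m) = (1/m!)(1/N!) * sum over k of (-1)^k (N-m-k)!/k!
    total = sum((-1) ** k * factorial(N - m - k) / factorial(k)
                for k in range(N - m + 1))
    return total / (factorial(m) * factorial(N))

def p_simulated(N, m, trials=200000):
    # Shuffle two decks against the fixed target (0, 1, ..., N-1)
    # and count how often exactly m positions match in both decks.
    hits = 0
    target = list(range(N))
    for _ in range(trials):
        a = target[:]
        b = target[:]
        random.shuffle(a)
        random.shuffle(b)
        matches = sum(1 for i in range(N) if a[i] == i and b[i] == i)
        hits += (matches == m)
    return hits / trials

print(p_exact(6, 0), p_simulated(6, 0))  # both about 0.8487
print(p_exact(50, 0))                    # about 0.9802, creeping toward 1
</code></pre> <p>The agreement supports the inclusion-exclusion derivation, and the $m=0$ values already suggest the limit $1$ claimed in Edit 1.</p>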
[ { "answer_id": 2119361, "score": 3, "text": "<p>Let $X_k = \\mathbf{1}_{A_k}$, $k=1,\\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\\dots + X_N$ is the number of matches. By linearity, \n$$\n\\mathbb{E}[X] = \\sum_{k=1}^N \\mathbb{E}[X_k] = N\\cdot \\frac{1}{N^2} = \\frac{1}{N}.\n$$\nBy the Markov inequality, $\\mathbb{P}(X\\ge 1) \\le \\mathbb{E}[X] \\to 0$, $N\\to\\infty$. Therefore, $P(N,0)\\to 1$, $N\\to\\infty$.</p>\n" }, { "answer_id": 2119070, "score": 2, "text": "<p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this:\n$$P(N,m)=\\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$\nNote that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\\dots+(-1)^na_n$$\nwith\n$$a_0&gt;a_1&gt;\\dots&gt;a_n&gt;0$$\nthen this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p>\n\n<p>For $m=0$ we have $A_1=1-1/N\\le P(N,m)\\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\\le 1/N$. The same argument applies if both $N\\to\\infty$ and $m\\to\\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\\le 1/N$, thus $P(N,m)\\to 0$</p>\n" }, { "answer_id": 2117729, "score": 1, "text": "<p>As you correctly noted, the limit</p>\n\n<p>$$\\lim_{ N \\rightarrow \\infty} P(N,m )= \\lim_{ N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$</p>\n\n<p>for $m=0 \\,$ reduces to</p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N} (-1)^k \\frac{(N-k)!}{k!} \n=\\lim_{N \\rightarrow \\infty} \\left( 1 - \\frac{1}{N} + \\frac{1}{2!}\\frac{1}{N(N-1)}- \\frac{1}{3!}\\frac{1}{N(N-1)(N-2)} ... \\right)$$</p>\n\n<p>Note that this sum of $N+1\\,$ terms tends to $1$ as $N \\rightarrow \\infty \\,\\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\\lfloor (N-1)/2 \\rfloor \\,\\,$ consecutive pairs (we can neglect the last term if $N -1\\,$ is odd) , these give a (positive) sum $S $ of the form</p>\n\n<p>$$ S= \\frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \\frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S= \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac {\\left( k!(N-k+1) -(k-1)! \\right) (N-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)(N-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-2 )/2 \\rfloor }{2!N(N-1)} $$</p>\n\n<p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\left ( 1 - \\frac{1}{N} \\right)=1 $$</p>\n\n<p>Now let us consider the case $m&gt;0\\,$. 
For $m=1\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,1) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N-1} (-1)^k \\frac{(N-1-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\left(\\frac {(N-1)!}{N}- \\frac{(N-2)!}{N!} + \\frac{(N-3)!}{2! N!} - \\frac{(N-4)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>For $m=2\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,2) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\frac{1}{N!} \\sum_{k=0}^{N-2} (-1)^k \\frac{(N-2-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\left(\\frac {(N-2)!}{N}- \\frac{(N-3)!}{N!} + \\frac{(N-4)!}{2! N!} - \\frac{(N-5)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>and so on. In the general case we get</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,m) = \\lim_{N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} + \\frac{(N-m-2)!}{2!N!}-\\frac{(N-m-3)!}{3!N!} + \\ldots \\right)$$</p>\n\n<p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\\lfloor (N-m-2) /2 \\rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p>\n\n<p>$$ S'= \\frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \\frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S'= \\small{ \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac {\\left( k!(N-m-k+1) -(k-1)! \\right) (N-m-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-m-2)/2 \\rfloor (N-m-2)! }{2!N!} } $$</p>\n\n<p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, and this implies that $S' $ converges to $0$ as well. So the initial limit for the general case reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} \\right) $$</p>\n\n<p>which, for $m \\geq 1 \\,\\,$, converges to $0$.</p>\n\n<p>Therefore, we can conclude that</p>\n\n<p>$$\\lim_{N \\rightarrow \\infty} P (N,m) = \\begin{cases} 1 \\,\\,\\,\\, \\text{if $m=0$} \\\\ 0 \\,\\,\\,\\, \\text{if $1 \\leq m \\leq N$} \\end{cases}$$</p>\n\n<p>This can be interpreted as follows: for $N \\rightarrow \\infty\\,\\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\\,\\,$, performed by setting $N=1000\\,\\,$, is shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7\" rel=\"nofollow noreferrer\">here</a>. 
Similar calculations for the cases $m=1$ and $m=2$ are shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a>. The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problem to calculate the sums for values of $N&gt;1000$).</p>\n\n<p>Lastly, regarding the first question, in my opinion there are no particular reasons to think that a closed form necessarily exists.</p>\n" }, { "answer_id": 2119361, "score": 3, "text": "<p>Let $X_k = \\mathbf{1}_{A_k}$, $k=1,\\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\\dots + X_N$ is the number of matches. By linearity, \n$$\n\\mathbb{E}[X] = \\sum_{k=1}^N \\mathbb{E}[X_k] = N\\cdot \\frac{1}{N^2} = \\frac{1}{N}.\n$$\nBy the Markov inequality, $\\mathbb{P}(X\\ge 1) \\le \\mathbb{E}[X] \\to 0$, $N\\to\\infty$. Therefore, $P(N,0)\\to 1$, $N\\to\\infty$.</p>\n" }, { "answer_id": 2119070, "score": 2, "text": "<p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this:\n$$P(N,m)=\\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$\nNote that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\\dots+(-1)^na_n$$\nwith\n$$a_0&gt;a_1&gt;\\dots&gt;a_n&gt;0$$\nthen this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p>\n\n<p>For $m=0$ we have $A_1=1-1/N\\le P(N,m)\\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\\le 1/N$. The same argument applies if both $N\\to\\infty$ and $m\\to\\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\\le 1/N$, thus $P(N,m)\\to 0$</p>\n" }, { "answer_id": 2117729, "score": 1, "text": "<p>As you correctly noted, the limit</p>\n\n<p>$$\\lim_{ N \\rightarrow \\infty} P(N,m )= \\lim_{ N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$</p>\n\n<p>for $m=0 \\,$ reduces to</p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N} (-1)^k \\frac{(N-k)!}{k!} \n=\\lim_{N \\rightarrow \\infty} \\left( 1 - \\frac{1}{N} + \\frac{1}{2!}\\frac{1}{N(N-1)}- \\frac{1}{3!}\\frac{1}{N(N-1)(N-2)} ... \\right)$$</p>\n\n<p>Note that this sum of $N+1\\,$ terms tends to $1$ as $N \\rightarrow \\infty \\,\\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\\lfloor (N-1)/2 \\rfloor \\,\\,$ consecutive pairs (we can neglect the last term if $N -1\\,$ is odd) , these give a (positive) sum $S $ of the form</p>\n\n<p>$$ S= \\frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \\frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S= \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac {\\left( k!(N-k+1) -(k-1)! \\right) (N-k)!}{(k-1)!k! 
N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)(N-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-2 )/2 \\rfloor }{2!N(N-1)} $$</p>\n\n<p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\left ( 1 - \\frac{1}{N} \\right)=1 $$</p>\n\n<p>Now let us consider the case $m&gt;0\\,$. For $m=1\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,1) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N-1} (-1)^k \\frac{(N-1-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\left(\\frac {(N-1)!}{N}- \\frac{(N-2)!}{N!} + \\frac{(N-3)!}{2! N!} - \\frac{(N-4)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>For $m=2\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,2) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\frac{1}{N!} \\sum_{k=0}^{N-2} (-1)^k \\frac{(N-2-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\left(\\frac {(N-2)!}{N}- \\frac{(N-3)!}{N!} + \\frac{(N-4)!}{2! N!} - \\frac{(N-5)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>and so on. In the general case we get</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,m) = \\lim_{N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} + \\frac{(N-m-2)!}{2!N!}-\\frac{(N-m-3)!}{3!N!} + \\ldots \\right)$$</p>\n\n<p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\\lfloor (N-m-2) /2 \\rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p>\n\n<p>$$ S'= \\frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \\frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S'= \\small{ \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac {\\left( k!(N-m-k+1) -(k-1)! \\right) (N-m-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-m-2)/2 \\rfloor (N-m-2)! }{2!N!} } $$</p>\n\n<p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, and this implies that $S' $ converges to $0$ as well. 
So the initial limit for the general case reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} \\right) $$</p>\n\n<p>which, for $m \\geq 1 \\,\\,$, converges to $0$.</p>\n\n<p>Therefore, we can conclude that</p>\n\n<p>$$\\lim_{N \\rightarrow \\infty} P (N,m) = \\begin{cases} 1 \\,\\,\\,\\, \\text{if $m=0$} \\\\ 0 \\,\\,\\,\\, \\text{if $1 \\leq m \\leq N$} \\end{cases}$$</p>\n\n<p>This can be interpreted as follows: for $N \\rightarrow \\infty\\,\\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\\,\\,$, performed by setting $N=1000\\,\\,$, is shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7\" rel=\"nofollow noreferrer\">here</a>. Similar calculations for the cases $m=1$ and $m=2$ are shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a>. The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problem to calculate the sums for values of $N&gt;1000$).</p>\n\n<p>Lastly, regarding the first question, in my opinion there are no particular reasons to think that a closed form necessarily exists.</p>\n" }, { "answer_id": 2119361, "score": 3, "text": "<p>Let $X_k = \\mathbf{1}_{A_k}$, $k=1,\\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\\dots + X_N$ is the number of matches. By linearity, \n$$\n\\mathbb{E}[X] = \\sum_{k=1}^N \\mathbb{E}[X_k] = N\\cdot \\frac{1}{N^2} = \\frac{1}{N}.\n$$\nBy the Markov inequality, $\\mathbb{P}(X\\ge 1) \\le \\mathbb{E}[X] \\to 0$, $N\\to\\infty$. Therefore, $P(N,0)\\to 1$, $N\\to\\infty$.</p>\n" }, { "answer_id": 2119070, "score": 2, "text": "<p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this:\n$$P(N,m)=\\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$\nNote that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\\dots+(-1)^na_n$$\nwith\n$$a_0&gt;a_1&gt;\\dots&gt;a_n&gt;0$$\nthen this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p>\n\n<p>For $m=0$ we have $A_1=1-1/N\\le P(N,m)\\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\\le 1/N$. 
The same argument applies if both $N\\to\\infty$ and $m\\to\\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\\le 1/N$, thus $P(N,m)\\to 0$</p>\n" }, { "answer_id": 2117729, "score": 1, "text": "<p>As you correctly noted, the limit</p>\n\n<p>$$\\lim_{ N \\rightarrow \\infty} P(N,m )= \\lim_{ N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$</p>\n\n<p>for $m=0 \\,$ reduces to</p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N} (-1)^k \\frac{(N-k)!}{k!} \n=\\lim_{N \\rightarrow \\infty} \\left( 1 - \\frac{1}{N} + \\frac{1}{2!}\\frac{1}{N(N-1)}- \\frac{1}{3!}\\frac{1}{N(N-1)(N-2)} ... \\right)$$</p>\n\n<p>Note that this sum of $N+1\\,$ terms tends to $1$ as $N \\rightarrow \\infty \\,\\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\\lfloor (N-1)/2 \\rfloor \\,\\,$ consecutive pairs (we can neglect the last term if $N -1\\,$ is odd) , these give a (positive) sum $S $ of the form</p>\n\n<p>$$ S= \\frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \\frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S= \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac {\\left( k!(N-k+1) -(k-1)! \\right) (N-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)(N-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-2 )/2 \\rfloor }{2!N(N-1)} $$</p>\n\n<p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\left ( 1 - \\frac{1}{N} \\right)=1 $$</p>\n\n<p>Now let us consider the case $m&gt;0\\,$. For $m=1\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,1) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N-1} (-1)^k \\frac{(N-1-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\left(\\frac {(N-1)!}{N}- \\frac{(N-2)!}{N!} + \\frac{(N-3)!}{2! N!} - \\frac{(N-4)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>For $m=2\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,2) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\frac{1}{N!} \\sum_{k=0}^{N-2} (-1)^k \\frac{(N-2-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\left(\\frac {(N-2)!}{N}- \\frac{(N-3)!}{N!} + \\frac{(N-4)!}{2! N!} - \\frac{(N-5)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>and so on. In the general case we get</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,m) = \\lim_{N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} + \\frac{(N-m-2)!}{2!N!}-\\frac{(N-m-3)!}{3!N!} + \\ldots \\right)$$</p>\n\n<p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\\lfloor (N-m-2) /2 \\rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p>\n\n<p>$$ S'= \\frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \\frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... 
$$</p>\n\n<p>which satisfies </p>\n\n<p>$$S'= \\small{ \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac {\\left( k!(N-m-k+1) -(k-1)! \\right) (N-m-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-m-2)/2 \\rfloor (N-m-2)! }{2!N!} } $$</p>\n\n<p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, and this implies that $S' $ converges to $0$ as well. So the initial limit for the general case reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} \\right) $$</p>\n\n<p>which, for $m \\geq 1 \\,\\,$, converges to $0$.</p>\n\n<p>Therefore, we can conclude that</p>\n\n<p>$$\\lim_{N \\rightarrow \\infty} P (N,m) = \\begin{cases} 1 \\,\\,\\,\\, \\text{if $m=0$} \\\\ 0 \\,\\,\\,\\, \\text{if $1 \\leq m \\leq N$} \\end{cases}$$</p>\n\n<p>This can be interpreted as follows: for $N \\rightarrow \\infty\\,\\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\\,\\,$, performed by setting $N=1000\\,\\,$, is shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7\" rel=\"nofollow noreferrer\">here</a>. Similar calculations for the cases $m=1$ and $m=2$ are shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a>. The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problem to calculate the sums for values of $N&gt;1000$).</p>\n\n<p>Lastly, regarding the first question, in my opinion there are no particular reasons to think that a closed form necessarily exists.</p>\n" }, { "answer_id": 2119361, "score": 3, "text": "<p>Let $X_k = \\mathbf{1}_{A_k}$, $k=1,\\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\\dots + X_N$ is the number of matches. By linearity, \n$$\n\\mathbb{E}[X] = \\sum_{k=1}^N \\mathbb{E}[X_k] = N\\cdot \\frac{1}{N^2} = \\frac{1}{N}.\n$$\nBy the Markov inequality, $\\mathbb{P}(X\\ge 1) \\le \\mathbb{E}[X] \\to 0$, $N\\to\\infty$. Therefore, $P(N,0)\\to 1$, $N\\to\\infty$.</p>\n" }, { "answer_id": 2119070, "score": 2, "text": "<p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this:\n$$P(N,m)=\\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$\nNote that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\\dots+(-1)^na_n$$\nwith\n$$a_0&gt;a_1&gt;\\dots&gt;a_n&gt;0$$\nthen this sum clearly lies between $a_0-a_1$ and $a_0$. 
To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p>\n\n<p>For $m=0$ we have $A_1=1-1/N\\le P(N,m)\\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\\le 1/N$. The same argument applies if both $N\\to\\infty$ and $m\\to\\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\\le 1/N$, thus $P(N,m)\\to 0$</p>\n" }, { "answer_id": 2117729, "score": 1, "text": "<p>As you correctly noted, the limit</p>\n\n<p>$$\\lim_{ N \\rightarrow \\infty} P(N,m )= \\lim_{ N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$</p>\n\n<p>for $m=0 \\,$ reduces to</p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N} (-1)^k \\frac{(N-k)!}{k!} \n=\\lim_{N \\rightarrow \\infty} \\left( 1 - \\frac{1}{N} + \\frac{1}{2!}\\frac{1}{N(N-1)}- \\frac{1}{3!}\\frac{1}{N(N-1)(N-2)} ... \\right)$$</p>\n\n<p>Note that this sum of $N+1\\,$ terms tends to $1$ as $N \\rightarrow \\infty \\,\\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\\lfloor (N-1)/2 \\rfloor \\,\\,$ consecutive pairs (we can neglect the last term if $N -1\\,$ is odd) , these give a (positive) sum $S $ of the form</p>\n\n<p>$$ S= \\frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \\frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S= \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac {\\left( k!(N-k+1) -(k-1)! \\right) (N-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)(N-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-2 )/2 \\rfloor }{2!N(N-1)} $$</p>\n\n<p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\left ( 1 - \\frac{1}{N} \\right)=1 $$</p>\n\n<p>Now let us consider the case $m&gt;0\\,$. For $m=1\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,1) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N-1} (-1)^k \\frac{(N-1-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\left(\\frac {(N-1)!}{N}- \\frac{(N-2)!}{N!} + \\frac{(N-3)!}{2! N!} - \\frac{(N-4)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>For $m=2\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,2) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\frac{1}{N!} \\sum_{k=0}^{N-2} (-1)^k \\frac{(N-2-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\left(\\frac {(N-2)!}{N}- \\frac{(N-3)!}{N!} + \\frac{(N-4)!}{2! N!} - \\frac{(N-5)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>and so on. 
In the general case we get</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,m) = \\lim_{N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} + \\frac{(N-m-2)!}{2!N!}-\\frac{(N-m-3)!}{3!N!} + \\ldots \\right)$$</p>\n\n<p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\\lfloor (N-m-2) /2 \\rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p>\n\n<p>$$ S'= \\frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \\frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S'= \\small{ \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac {\\left( k!(N-m-k+1) -(k-1)! \\right) (N-m-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-m-2)/2 \\rfloor (N-m-2)! }{2!N!} } $$</p>\n\n<p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, and this implies that $S' $ converges to $0$ as well. So the initial limit for the general case reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} \\right) $$</p>\n\n<p>which, for $m \\geq 1 \\,\\,$, converges to $0$.</p>\n\n<p>Therefore, we can conclude that</p>\n\n<p>$$\\lim_{N \\rightarrow \\infty} P (N,m) = \\begin{cases} 1 \\,\\,\\,\\, \\text{if $m=0$} \\\\ 0 \\,\\,\\,\\, \\text{if $1 \\leq m \\leq N$} \\end{cases}$$</p>\n\n<p>This can be interpreted as follows: for $N \\rightarrow \\infty\\,\\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\\,\\,$, performed by setting $N=1000\\,\\,$, is shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7\" rel=\"nofollow noreferrer\">here</a>. Similar calculations for the cases $m=1$ and $m=2$ are shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a>. The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problem to calculate the sums for values of $N&gt;1000$).</p>\n\n<p>Lastly, regarding the first question, in my opinion there are no particular reasons to think that a closed form necessarily exists.</p>\n" }, { "answer_id": 2119361, "score": 3, "text": "<p>Let $X_k = \\mathbf{1}_{A_k}$, $k=1,\\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\\dots + X_N$ is the number of matches. By linearity, \n$$\n\\mathbb{E}[X] = \\sum_{k=1}^N \\mathbb{E}[X_k] = N\\cdot \\frac{1}{N^2} = \\frac{1}{N}.\n$$\nBy the Markov inequality, $\\mathbb{P}(X\\ge 1) \\le \\mathbb{E}[X] \\to 0$, $N\\to\\infty$. 
Therefore, $P(N,0)\\to 1$, $N\\to\\infty$.</p>\n" }, { "answer_id": 2119070, "score": 2, "text": "<p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this:\n$$P(N,m)=\\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$\nNote that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\\dots+(-1)^na_n$$\nwith\n$$a_0&gt;a_1&gt;\\dots&gt;a_n&gt;0$$\nthen this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p>\n\n<p>For $m=0$ we have $A_1=1-1/N\\le P(N,m)\\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\\le 1/N$. The same argument applies if both $N\\to\\infty$ and $m\\to\\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\\le 1/N$, thus $P(N,m)\\to 0$</p>\n" }, { "answer_id": 2117729, "score": 1, "text": "<p>As you correctly noted, the limit</p>\n\n<p>$$\\lim_{ N \\rightarrow \\infty} P(N,m )= \\lim_{ N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!}$$</p>\n\n<p>for $m=0 \\,$ reduces to</p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N} (-1)^k \\frac{(N-k)!}{k!} \n=\\lim_{N \\rightarrow \\infty} \\left( 1 - \\frac{1}{N} + \\frac{1}{2!}\\frac{1}{N(N-1)}- \\frac{1}{3!}\\frac{1}{N(N-1)(N-2)} ... \\right)$$</p>\n\n<p>Note that this sum of $N+1\\,$ terms tends to $1$ as $N \\rightarrow \\infty \\,\\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\\lfloor (N-1)/2 \\rfloor \\,\\,$ consecutive pairs (we can neglect the last term if $N -1\\,$ is odd) , these give a (positive) sum $S $ of the form</p>\n\n<p>$$ S= \\frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \\frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S= \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac {\\left( k!(N-k+1) -(k-1)! \\right) (N-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)(N-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N} \\frac{(N-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-2 )/2 \\rfloor }{2!N(N-1)} $$</p>\n\n<p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\left ( 1 - \\frac{1}{N} \\right)=1 $$</p>\n\n<p>Now let us consider the case $m&gt;0\\,$. For $m=1\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,1) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac{1}{N!} \\sum_{k=0}^{N-1} (-1)^k \\frac{(N-1-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\left(\\frac {(N-1)!}{N}- \\frac{(N-2)!}{N!} + \\frac{(N-3)!}{2! N!} - \\frac{(N-4)!}{3! 
N!}\\ldots \\right)$$</p>\n\n<p>For $m=2\\,\\,$, the limit of the sum reduces to</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,2) = \\\\ \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\frac{1}{N!} \\sum_{k=0}^{N-2} (-1)^k \\frac{(N-2-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1}{2!} \\left(\\frac {(N-2)!}{N}- \\frac{(N-3)!}{N!} + \\frac{(N-4)!}{2! N!} - \\frac{(N-5)!}{3! N!}\\ldots \\right)$$</p>\n\n<p>and so on. In the general case we get</p>\n\n<p>$$= \\lim_{N \\rightarrow \\infty} P(N,m) = \\lim_{N \\rightarrow \\infty} \\frac{1}{m!} \\frac{1}{N!} \\sum_{k=0}^{N-m} (-1)^k \\frac{(N-m-k)!}{k!} \\\\\n= \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} + \\frac{(N-m-2)!}{2!N!}-\\frac{(N-m-3)!}{3!N!} + \\ldots \\right)$$</p>\n\n<p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\\lfloor (N-m-2) /2 \\rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p>\n\n<p>$$ S'= \\frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \\frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... $$</p>\n\n<p>which satisfies </p>\n\n<p>$$S'= \\small{ \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac {\\left( k!(N-m-k+1) -(k-1)! \\right) (N-m-k)!}{(k-1)!k! N!} \\\\ &lt; \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\\\ = \\sum_{\\substack {k=3 \\\\ k \\text {odd}}}^{N-m} \\frac{(N-m-k+1)!}{(k-1)! N!} \\\\ &lt; \\frac{ \\lfloor (N-m-2)/2 \\rfloor (N-m-2)! }{2!N!} } $$</p>\n\n<p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \\rightarrow \\infty \\,\\,$, and this implies that $S' $ converges to $0$ as well. So the initial limit for the general case reduces to </p>\n\n<p>$$ \\lim_{N \\rightarrow \\infty} \\frac {1!}{m!} \\left(\\frac {(N-m)!}{N!}- \\frac{(N-m-1)!}{N!} \\right) $$</p>\n\n<p>which, for $m \\geq 1 \\,\\,$, converges to $0$.</p>\n\n<p>Therefore, we can conclude that</p>\n\n<p>$$\\lim_{N \\rightarrow \\infty} P (N,m) = \\begin{cases} 1 \\,\\,\\,\\, \\text{if $m=0$} \\\\ 0 \\,\\,\\,\\, \\text{if $1 \\leq m \\leq N$} \\end{cases}$$</p>\n\n<p>This can be interpreted as follows: for $N \\rightarrow \\infty\\,\\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\\,\\,$, performed by setting $N=1000\\,\\,$, is shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7\" rel=\"nofollow noreferrer\">here</a>. Similar calculations for the cases $m=1$ and $m=2$ are shown <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0\" rel=\"nofollow noreferrer\">here</a>. 
The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problems calculating the sums for values of $N&gt;1000$).</p>\n\n<p>Lastly, regarding the first question, in my opinion there is no particular reason to think that a closed form necessarily exists.</p>\n" } ]
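<p><em>Editorial aside (not from the original answers):</em> evaluating the exact formula numerically agrees with all three answers; for instance, $$P(10,0)\approx 0.905,\qquad P(100,0)\approx 0.990,\qquad P(1000,0)\approx 0.999,$$ in line with the bracketing $1-\frac{1}{N}\le P(N,0)\le 1$ and with the Markov-inequality argument.</p>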
Question: Probability of double matchings Answer 1: <p>Let $X_k = \mathbf{1}_{A_k}$, $k=1,\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\dots + X_N$ is the number of matches. By linearity, $$ \mathbb{E}[X] = \sum_{k=1}^N \mathbb{E}[X_k] = N\cdot \frac{1}{N^2} = \frac{1}{N}. $$ By the Markov inequality, $\mathbb{P}(X\ge 1) \le \mathbb{E}[X] \to 0$, $N\to\infty$. Therefore, $P(N,0)\to 1$, $N\to\infty$.</p> Answer 2: <p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this: $$P(N,m)=\frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!}$$ Note that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\dots+(-1)^na_n$$ with $$a_0&gt;a_1&gt;\dots&gt;a_n&gt;0$$ then this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p> <p>For $m=0$ we have $A_1=1-1/N\le P(N,m)\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\le 1/N$. The same argument applies if both $N\to\infty$ and $m\to\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\le 1/N$, thus $P(N,m)\to 0$</p> Answer 3: <p>As you correctly noted, the limit</p> <p>$$\lim_{ N \rightarrow \infty} P(N,m )= \lim_{ N \rightarrow \infty} \frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!}$$</p> <p>for $m=0 \,$ reduces to</p> <p>$$ \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N} (-1)^k \frac{(N-k)!}{k!} =\lim_{N \rightarrow \infty} \left( 1 - \frac{1}{N} + \frac{1}{2!}\frac{1}{N(N-1)}- \frac{1}{3!}\frac{1}{N(N-1)(N-2)} ... \right)$$</p> <p>Note that this sum of $N+1\,$ terms tends to $1$ as $N \rightarrow \infty \,\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\lfloor (N-1)/2 \rfloor \,\,$ consecutive pairs (we can neglect the last term if $N -1\,$ is odd) , these give a (positive) sum $S $ of the form</p> <p>$$ S= \frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p> <p>which satisfies </p> <p>$$S= \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac {\left( k!(N-k+1) -(k-1)! \right) (N-k)!}{(k-1)!k! N!} \\ &lt; \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac{(N-k+1)(N-k)!}{(k-1)! N!} \\ = \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac{(N-k+1)!}{(k-1)! N!} \\ &lt; \frac{ \lfloor (N-2 )/2 \rfloor }{2!N(N-1)} $$</p> <p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \rightarrow \infty \,\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p> <p>$$ \lim_{N \rightarrow \infty} \left ( 1 - \frac{1}{N} \right)=1 $$</p> <p>Now let us consider the case $m&gt;0\,$. For $m=1\,\,$, the limit of the sum reduces to</p> <p>$$= \lim_{N \rightarrow \infty} P(N,1) = \\ \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N-1} (-1)^k \frac{(N-1-k)!}{k!} \\ = \lim_{N \rightarrow \infty} \left(\frac {(N-1)!}{N}- \frac{(N-2)!}{N!} + \frac{(N-3)!}{2! N!} - \frac{(N-4)!}{3! 
N!}\ldots \right)$$</p> <p>For $m=2\,\,$, the limit of the sum reduces to</p> <p>$$= \lim_{N \rightarrow \infty} P(N,2) = \\ \lim_{N \rightarrow \infty} \frac {1}{2!} \frac{1}{N!} \sum_{k=0}^{N-2} (-1)^k \frac{(N-2-k)!}{k!} \\ = \lim_{N \rightarrow \infty} \frac {1}{2!} \left(\frac {(N-2)!}{N}- \frac{(N-3)!}{N!} + \frac{(N-4)!}{2! N!} - \frac{(N-5)!}{3! N!}\ldots \right)$$</p> <p>and so on. In the general case we get</p> <p>$$= \lim_{N \rightarrow \infty} P(N,m) = \lim_{N \rightarrow \infty} \frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!} \\ = \lim_{N \rightarrow \infty} \frac {1!}{m!} \left(\frac {(N-m)!}{N!}- \frac{(N-m-1)!}{N!} + \frac{(N-m-2)!}{2!N!}-\frac{(N-m-3)!}{3!N!} + \ldots \right)$$</p> <p>To study this limit, we can procede again as above, by grouping the terms, starting from the third, in $\lfloor (N-m-2) /2 \rfloor $ consecutive pairs. We obtain a positive sum $S'$ of the form</p> <p>$$ S'= \frac {3!(N-m-2) -2!) (N-m-3)!}{2!3! N!} + \frac {5!(N-m-4) -4!) (N-m-5)!}{4!5! N!} ... $$</p> <p>which satisfies </p> <p>$$S'= \small{ \sum_{\substack {k=3 \\ k \text {odd}}}^{N-m} \frac {\left( k!(N-m-k+1) -(k-1)! \right) (N-m-k)!}{(k-1)!k! N!} \\ &lt; \sum_{\substack {k=3 \\ k \text {odd}}}^{N-m} \frac{(N-m-k+1)(N-m-k)!}{(k-1)! N!} \\ = \sum_{\substack {k=3 \\ k \text {odd}}}^{N-m} \frac{(N-m-k+1)!}{(k-1)! N!} \\ &lt; \frac{ \lfloor (N-m-2)/2 \rfloor (N-m-2)! }{2!N!} } $$</p> <p>where again the last step takes into account that, in the sum shown in the second to last step, all terms after the first one (i.e., that corresponding to $k=3$) are smaller than it. As above, the last expression clearly tends to zero for $N \rightarrow \infty \,\,$, and this implies that $S' $ converges to $0$ as well. So the initial limit for the general case reduces to </p> <p>$$ \lim_{N \rightarrow \infty} \frac {1!}{m!} \left(\frac {(N-m)!}{N!}- \frac{(N-m-1)!}{N!} \right) $$</p> <p>which, for $m \geq 1 \,\,$, converges to $0$.</p> <p>Therefore, we can conclude that</p> <p>$$\lim_{N \rightarrow \infty} P (N,m) = \begin{cases} 1 \,\,\,\, \text{if $m=0$} \\ 0 \,\,\,\, \text{if $1 \leq m \leq N$} \end{cases}$$</p> <p>This can be interpreted as follows: for $N \rightarrow \infty\,\, $ , the probability of getting any number of matches from our two decks tends to zero, so we have probability $1$ to get no matches. A calculation by WA for the case $m=0\,\,$, performed by setting $N=1000\,\,$, is shown <a href="http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%281000-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%201000&amp;x=12&amp;y=7" rel="nofollow noreferrer">here</a>. Similar calculations for the cases $m=1$ and $m=2$ are shown <a href="http://m.wolframalpha.com/input/?i=sum%201%2F1000%21%20%28-1%29%5Ek%20%28999-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20999&amp;x=0&amp;y=0" rel="nofollow noreferrer">here</a> and <a href="http://m.wolframalpha.com/input/?i=sum%201%2F2%20*%201%2F1000%21%20%28-1%29%5Ek%20%28998-k%29%21%2Fk%21%2C%20%20%20k%3D0%20to%20998&amp;x=0&amp;y=0" rel="nofollow noreferrer">here</a>. The values provided by WA confirm the results obtained above (unfortunately, WA seems to have some problem to calculate the sums for values of $N&gt;1000$).</p> <p>Lastly, regarding the first question, in my opinion there are no particular reasons to think that a closed form necessarily exists.</p> Answer 4: <p>Let $X_k = \mathbf{1}_{A_k}$, $k=1,\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\dots + X_N$ is the number of matches. 
By linearity, $$ \mathbb{E}[X] = \sum_{k=1}^N \mathbb{E}[X_k] = N\cdot \frac{1}{N^2} = \frac{1}{N}. $$ By the Markov inequality, $\mathbb{P}(X\ge 1) \le \mathbb{E}[X] \to 0$, $N\to\infty$. Therefore, $P(N,0)\to 1$, $N\to\infty$.</p> Answer 5: <p>I don't see how to get more nice form of the expression, but I'll try to answer questions 2 and 3. So we have this: $$P(N,m)=\frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!}$$ Note that the absolute value of the term of the sum decreases with growth of $k$. But if we have sum of the form $$A_n=a_0-a_1+a_2-a_3+\dots+(-1)^na_n$$ with $$a_0&gt;a_1&gt;\dots&gt;a_n&gt;0$$ then this sum clearly lies between $a_0-a_1$ and $a_0$. To prove this, just note that $A_{i+2}$ lies between $A_i$ and $A_{i+1}$ and use induction to show that $A_i$ lies between $A_0=a_0$ and $A_1=a_0-a_1$ then.</p> <p>For $m=0$ we have $A_1=1-1/N\le P(N,m)\le1=A_0$, which means that the limit is $1$. All limits for $m&gt;0$ then are zeros, because we are working with probabilities of disjoint events. However, it is easy to see this explicitly, because in these cases $A_0\le 1/N$. The same argument applies if both $N\to\infty$ and $m\to\infty$: it doesn't really matter, because as soon as $m&gt;0$ we obtain $P(N,m)\le 1/N$, thus $P(N,m)\to 0$</p> Answer 6: <p>As you correctly noted, the limit</p> <p>$$\lim_{ N \rightarrow \infty} P(N,m )= \lim_{ N \rightarrow \infty} \frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!}$$</p> <p>for $m=0 \,$ reduces to</p> <p>$$ \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N} (-1)^k \frac{(N-k)!}{k!} =\lim_{N \rightarrow \infty} \left( 1 - \frac{1}{N} + \frac{1}{2!}\frac{1}{N(N-1)}- \frac{1}{3!}\frac{1}{N(N-1)(N-2)} ... \right)$$</p> <p>Note that this sum of $N+1\,$ terms tends to $1$ as $N \rightarrow \infty \,\,$. This can be shown in several ways. For example, if we group all terms starting from that in the third position in $\lfloor (N-1)/2 \rfloor \,\,$ consecutive pairs (we can neglect the last term if $N -1\,$ is odd) , these give a (positive) sum $S $ of the form</p> <p>$$ S= \frac {3!(N-2) -2!) (N-3)!}{2!3! N!} + \frac {5!(N-5) -4!) (N-5)!}{4!5! N!} ... $$</p> <p>which satisfies </p> <p>$$S= \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac {\left( k!(N-k+1) -(k-1)! \right) (N-k)!}{(k-1)!k! N!} \\ &lt; \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac{(N-k+1)(N-k)!}{(k-1)! N!} \\ = \sum_{\substack {k=3 \\ k \text {odd}}}^{N} \frac{(N-k+1)!}{(k-1)! N!} \\ &lt; \frac{ \lfloor (N-2 )/2 \rfloor }{2!N(N-1)} $$</p> <p>where the last step takes into account that the terms of the sum in second to last expression progressively decreases (and then all terms after the first one are smaller than it). The last expression clearly tends to zero for $N \rightarrow \infty \,\,$, implying that $S $ converges to $0$ as well. So the initial limit reduces to </p> <p>$$ \lim_{N \rightarrow \infty} \left ( 1 - \frac{1}{N} \right)=1 $$</p> <p>Now let us consider the case $m&gt;0\,$. For $m=1\,\,$, the limit of the sum reduces to</p> <p>$$= \lim_{N \rightarrow \infty} P(N,1) = \\ \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N-1} (-1)^k \frac{(N-1-k)!}{k!} \\ = \lim_{N \rightarrow \infty} \left(\frac {(N-1)!}{N}- \frac{(N-2)!}{N!} + \frac{(N-3)!}{2! N!} - \frac{(N-4)!}{3! 
Proof about inclusion of subspaces
2,111,830
5
https://math.stackexchange.com/questions/2111830/proof-about-inclusion-of-subspaces
<p>I have the following question and I'm not sure my proof is correct/approached correctly:</p> <blockquote> <p>Let $V$ be an $n$-dimensional vector space, let $U_i \subset V$ be subspaces of $V$ for $i = 1,2,\dots,r$ where $$U_1 \subset U_2 \subset \dots \subset U_r$$ If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}$</p> </blockquote> <p>I was thinking something along the lines of: the subsets are strict, thus 'moving' from $U_i$ to $U_{i+1}$ increases the dimension by one, which means that, when reaching $U_r$, the dimension is greater than $n$; but this is not possible since $U_r$ is a subspace of $V$ and thus has dimension at most $n$.</p> <p>Is this a good approach to this proof? I'm not sure because of the strict inclusions: is it true in every case that $\dim(U_i) &lt; \dim(U_{i+1})$?</p> <p>I was maybe thinking of proving this via induction, but I'm not sure how that would work.</p> <p>Thanks in advance.</p> <p>EDIT: Updated subset-strictness</p>
[ { "answer_id": 2111840, "score": 5, "text": "<p>Your proof is (essentially) correct.</p>\n\n<p>In the second line of the paragraph you want</p>\n\n<blockquote>\n <p>increases the dimension by <em>at least</em> one</p>\n</blockquote>\n\n<p>For example, consider \n$$\n\\{0\\} \\subseteq \\text{$x$-$y$-plane} \\subseteq \\ldots\n$$\nin $\\mathbb{R}^3$.</p>\n\n<p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p>\n\n<p>You don't need induction.</p>\n\n<p>Note: although the question you're asking is clear, the statement </p>\n\n<blockquote>\n <p>$$U_1 \\subsetneq U_2 \\subsetneq \\dots \\subsetneq U_r$$ If $r&gt;n+1$ ...</p>\n</blockquote>\n\n<p>isn't the right way to phrase it. The strict inequalities should be weak $\\subseteq$. Then you're to show that at least one of them is an equality.</p>\n" }, { "answer_id": 2111838, "score": 3, "text": "<p>Hints following your own work:</p>\n\n<p>=== Since $\\;U_k\\subsetneq U_{k+1}\\;$ , we get that <em>always</em> $\\;\\dim U_{k+1}\\ge \\dim U_k+1\\;$ , and from here $\\;\\dim U_r\\ge n\\;$ ..</p>\n\n<p>=== If $\\;W\\le V\\;$ are finite dimensional linear spaces and $\\;\\dim W=\\dim V\\;$ , then $\\;W=V\\;$ .</p>\n" }, { "answer_id": 2111890, "score": 3, "text": "<p>Your idea is very good, although not well expressed; but also the statement is not well written. Let's state it properly.</p>\n\n<blockquote>\n <p>Let $V$ be a $n$-dimensional vector space, let $U_i \\subseteq V$ be subspaces of $V$ for $i = 1,2,\\dots,r$ where $$U_1 \\subseteq U_2 \\subseteq \\dots \\subseteq U_r$$\n If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\\dots=U_r$</p>\n</blockquote>\n\n<p>Let's prove, by induction on $r$, that, given a sequence of subspaces\n$$\nU_1\\subsetneq U_2\\subsetneq\\dots\\subsetneq U_r\n$$\nthen $\\dim U_r\\ge r-1$. We only use the fact that $U\\subsetneq U'$ implies $\\dim U'\\ge 1+\\dim U$.</p>\n\n<p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\\dim U_{r-1}\\ge r-2$ and so $\\dim U_r\\ge (r-2)+1=r-1$.</p>\n\n<p>So, if we have\n$$\nU_1\\subseteq U_2\\subseteq\\dots\\subseteq U_r\n$$\nand $r&gt;n+1$, the above result proves that not all inclusions are proper. </p>\n\n<p>Let $j$ be the maximum index such that $U_{j}\\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. Now, for $i=j+1$, $U_i=U_{i+1}=\\dots=U_r$.</p>\n" }, { "answer_id": 2111840, "score": 5, "text": "<p>Your proof is (essentially) correct.</p>\n\n<p>In the second line of the paragraph you want</p>\n\n<blockquote>\n <p>increases the dimension by <em>at least</em> one</p>\n</blockquote>\n\n<p>For example, consider \n$$\n\\{0\\} \\subseteq \\text{$x$-$y$-plane} \\subseteq \\ldots\n$$\nin $\\mathbb{R}^3$.</p>\n\n<p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p>\n\n<p>You don't need induction.</p>\n\n<p>Note: although the question you're asking is clear, the statement </p>\n\n<blockquote>\n <p>$$U_1 \\subsetneq U_2 \\subsetneq \\dots \\subsetneq U_r$$ If $r&gt;n+1$ ...</p>\n</blockquote>\n\n<p>isn't the right way to phrase it. The strict inequalities should be weak $\\subseteq$. 
Then you're to show that at least one of them is an equality.</p>\n" }, { "answer_id": 2111838, "score": 3, "text": "<p>Hints following your own work:</p>\n\n<p>=== Since $\\;U_k\\subsetneq U_{k+1}\\;$ , we get that <em>always</em> $\\;\\dim U_{k+1}\\ge \\dim U_k+1\\;$ , and from here $\\;\\dim U_r\\ge n\\;$ ..</p>\n\n<p>=== If $\\;W\\le V\\;$ are finite dimensional linear spaces and $\\;\\dim W=\\dim V\\;$ , then $\\;W=V\\;$ .</p>\n" }, { "answer_id": 2111890, "score": 3, "text": "<p>Your idea is very good, although not well expressed; but also the statement is not well written. Let's state it properly.</p>\n\n<blockquote>\n <p>Let $V$ be a $n$-dimensional vector space, let $U_i \\subseteq V$ be subspaces of $V$ for $i = 1,2,\\dots,r$ where $$U_1 \\subseteq U_2 \\subseteq \\dots \\subseteq U_r$$\n If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\\dots=U_r$</p>\n</blockquote>\n\n<p>Let's prove, by induction on $r$, that, given a sequence of subspaces\n$$\nU_1\\subsetneq U_2\\subsetneq\\dots\\subsetneq U_r\n$$\nthen $\\dim U_r\\ge r-1$. We only use the fact that $U\\subsetneq U'$ implies $\\dim U'\\ge 1+\\dim U$.</p>\n\n<p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\\dim U_{r-1}\\ge r-2$ and so $\\dim U_r\\ge (r-2)+1=r-1$.</p>\n\n<p>So, if we have\n$$\nU_1\\subseteq U_2\\subseteq\\dots\\subseteq U_r\n$$\nand $r&gt;n+1$, the above result proves that not all inclusions are proper. </p>\n\n<p>Let $j$ be the maximum index such that $U_{j}\\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. Now, for $i=j+1$, $U_i=U_{i+1}=\\dots=U_r$.</p>\n" }, { "answer_id": 2111840, "score": 5, "text": "<p>Your proof is (essentially) correct.</p>\n\n<p>In the second line of the paragraph you want</p>\n\n<blockquote>\n <p>increases the dimension by <em>at least</em> one</p>\n</blockquote>\n\n<p>For example, consider \n$$\n\\{0\\} \\subseteq \\text{$x$-$y$-plane} \\subseteq \\ldots\n$$\nin $\\mathbb{R}^3$.</p>\n\n<p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p>\n\n<p>You don't need induction.</p>\n\n<p>Note: although the question you're asking is clear, the statement </p>\n\n<blockquote>\n <p>$$U_1 \\subsetneq U_2 \\subsetneq \\dots \\subsetneq U_r$$ If $r&gt;n+1$ ...</p>\n</blockquote>\n\n<p>isn't the right way to phrase it. The strict inequalities should be weak $\\subseteq$. Then you're to show that at least one of them is an equality.</p>\n" }, { "answer_id": 2111838, "score": 3, "text": "<p>Hints following your own work:</p>\n\n<p>=== Since $\\;U_k\\subsetneq U_{k+1}\\;$ , we get that <em>always</em> $\\;\\dim U_{k+1}\\ge \\dim U_k+1\\;$ , and from here $\\;\\dim U_r\\ge n\\;$ ..</p>\n\n<p>=== If $\\;W\\le V\\;$ are finite dimensional linear spaces and $\\;\\dim W=\\dim V\\;$ , then $\\;W=V\\;$ .</p>\n" }, { "answer_id": 2111890, "score": 3, "text": "<p>Your idea is very good, although not well expressed; but also the statement is not well written. 
Let's state it properly.</p>\n\n<blockquote>\n <p>Let $V$ be a $n$-dimensional vector space, let $U_i \\subseteq V$ be subspaces of $V$ for $i = 1,2,\\dots,r$ where $$U_1 \\subseteq U_2 \\subseteq \\dots \\subseteq U_r$$\n If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\\dots=U_r$</p>\n</blockquote>\n\n<p>Let's prove, by induction on $r$, that, given a sequence of subspaces\n$$\nU_1\\subsetneq U_2\\subsetneq\\dots\\subsetneq U_r\n$$\nthen $\\dim U_r\\ge r-1$. We only use the fact that $U\\subsetneq U'$ implies $\\dim U'\\ge 1+\\dim U$.</p>\n\n<p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\\dim U_{r-1}\\ge r-2$ and so $\\dim U_r\\ge (r-2)+1=r-1$.</p>\n\n<p>So, if we have\n$$\nU_1\\subseteq U_2\\subseteq\\dots\\subseteq U_r\n$$\nand $r&gt;n+1$, the above result proves that not all inclusions are proper. </p>\n\n<p>Let $j$ be the maximum index such that $U_{j}\\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. Now, for $i=j+1$, $U_i=U_{i+1}=\\dots=U_r$.</p>\n" }, { "answer_id": 2111840, "score": 5, "text": "<p>Your proof is (essentially) correct.</p>\n\n<p>In the second line of the paragraph you want</p>\n\n<blockquote>\n <p>increases the dimension by <em>at least</em> one</p>\n</blockquote>\n\n<p>For example, consider \n$$\n\\{0\\} \\subseteq \\text{$x$-$y$-plane} \\subseteq \\ldots\n$$\nin $\\mathbb{R}^3$.</p>\n\n<p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p>\n\n<p>You don't need induction.</p>\n\n<p>Note: although the question you're asking is clear, the statement </p>\n\n<blockquote>\n <p>$$U_1 \\subsetneq U_2 \\subsetneq \\dots \\subsetneq U_r$$ If $r&gt;n+1$ ...</p>\n</blockquote>\n\n<p>isn't the right way to phrase it. The strict inequalities should be weak $\\subseteq$. Then you're to show that at least one of them is an equality.</p>\n" }, { "answer_id": 2111838, "score": 3, "text": "<p>Hints following your own work:</p>\n\n<p>=== Since $\\;U_k\\subsetneq U_{k+1}\\;$ , we get that <em>always</em> $\\;\\dim U_{k+1}\\ge \\dim U_k+1\\;$ , and from here $\\;\\dim U_r\\ge n\\;$ ..</p>\n\n<p>=== If $\\;W\\le V\\;$ are finite dimensional linear spaces and $\\;\\dim W=\\dim V\\;$ , then $\\;W=V\\;$ .</p>\n" }, { "answer_id": 2111890, "score": 3, "text": "<p>Your idea is very good, although not well expressed; but also the statement is not well written. Let's state it properly.</p>\n\n<blockquote>\n <p>Let $V$ be a $n$-dimensional vector space, let $U_i \\subseteq V$ be subspaces of $V$ for $i = 1,2,\\dots,r$ where $$U_1 \\subseteq U_2 \\subseteq \\dots \\subseteq U_r$$\n If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\\dots=U_r$</p>\n</blockquote>\n\n<p>Let's prove, by induction on $r$, that, given a sequence of subspaces\n$$\nU_1\\subsetneq U_2\\subsetneq\\dots\\subsetneq U_r\n$$\nthen $\\dim U_r\\ge r-1$. We only use the fact that $U\\subsetneq U'$ implies $\\dim U'\\ge 1+\\dim U$.</p>\n\n<p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\\dim U_{r-1}\\ge r-2$ and so $\\dim U_r\\ge (r-2)+1=r-1$.</p>\n\n<p>So, if we have\n$$\nU_1\\subseteq U_2\\subseteq\\dots\\subseteq U_r\n$$\nand $r&gt;n+1$, the above result proves that not all inclusions are proper. </p>\n\n<p>Let $j$ be the maximum index such that $U_{j}\\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. 
Now, for $i=j+1$, $U_i=U_{i+1}=\\dots=U_r$.</p>\n" }, { "answer_id": 2111840, "score": 5, "text": "<p>Your proof is (essentially) correct.</p>\n\n<p>In the second line of the paragraph you want</p>\n\n<blockquote>\n <p>increases the dimension by <em>at least</em> one</p>\n</blockquote>\n\n<p>For example, consider \n$$\n\\{0\\} \\subseteq \\text{$x$-$y$-plane} \\subseteq \\ldots\n$$\nin $\\mathbb{R}^3$.</p>\n\n<p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p>\n\n<p>You don't need induction.</p>\n\n<p>Note: although the question you're asking is clear, the statement </p>\n\n<blockquote>\n <p>$$U_1 \\subsetneq U_2 \\subsetneq \\dots \\subsetneq U_r$$ If $r&gt;n+1$ ...</p>\n</blockquote>\n\n<p>isn't the right way to phrase it. The strict inequalities should be weak $\\subseteq$. Then you're to show that at least one of them is an equality.</p>\n" }, { "answer_id": 2111838, "score": 3, "text": "<p>Hints following your own work:</p>\n\n<p>=== Since $\\;U_k\\subsetneq U_{k+1}\\;$ , we get that <em>always</em> $\\;\\dim U_{k+1}\\ge \\dim U_k+1\\;$ , and from here $\\;\\dim U_r\\ge n\\;$ ..</p>\n\n<p>=== If $\\;W\\le V\\;$ are finite dimensional linear spaces and $\\;\\dim W=\\dim V\\;$ , then $\\;W=V\\;$ .</p>\n" }, { "answer_id": 2111890, "score": 3, "text": "<p>Your idea is very good, although not well expressed; but also the statement is not well written. Let's state it properly.</p>\n\n<blockquote>\n <p>Let $V$ be a $n$-dimensional vector space, let $U_i \\subseteq V$ be subspaces of $V$ for $i = 1,2,\\dots,r$ where $$U_1 \\subseteq U_2 \\subseteq \\dots \\subseteq U_r$$\n If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\\dots=U_r$</p>\n</blockquote>\n\n<p>Let's prove, by induction on $r$, that, given a sequence of subspaces\n$$\nU_1\\subsetneq U_2\\subsetneq\\dots\\subsetneq U_r\n$$\nthen $\\dim U_r\\ge r-1$. We only use the fact that $U\\subsetneq U'$ implies $\\dim U'\\ge 1+\\dim U$.</p>\n\n<p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\\dim U_{r-1}\\ge r-2$ and so $\\dim U_r\\ge (r-2)+1=r-1$.</p>\n\n<p>So, if we have\n$$\nU_1\\subseteq U_2\\subseteq\\dots\\subseteq U_r\n$$\nand $r&gt;n+1$, the above result proves that not all inclusions are proper. </p>\n\n<p>Let $j$ be the maximum index such that $U_{j}\\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. Now, for $i=j+1$, $U_i=U_{i+1}=\\dots=U_r$.</p>\n" } ]
Question: Proof about inclusion of subspaces Answer 1: <p>Your proof is (essentially) correct.</p> <p>In the second line of the paragraph you want</p> <blockquote> <p>increases the dimension by <em>at least</em> one</p> </blockquote> <p>For example, consider $$ \{0\} \subseteq \text{$x$-$y$-plane} \subseteq \ldots $$ in $\mathbb{R}^3$.</p> <p>Your instructor might want you to prove that claim about dimensions, or might be willing to take it as known, depending on what you've done in class.</p> <p>You don't need induction.</p> <p>Note: although the question you're asking is clear, the statement </p> <blockquote> <p>$$U_1 \subsetneq U_2 \subsetneq \dots \subsetneq U_r$$ If $r&gt;n+1$ ...</p> </blockquote> <p>isn't the right way to phrase it. The strict inequalities should be weak $\subseteq$. Then you're to show that at least one of them is an equality.</p> Answer 2: <p>Hints following your own work:</p> <p>=== Since $\;U_k\subsetneq U_{k+1}\;$ , we get that <em>always</em> $\;\dim U_{k+1}\ge \dim U_k+1\;$ , and from here $\;\dim U_r\ge n\;$ ..</p> <p>=== If $\;W\le V\;$ are finite dimensional linear spaces and $\;\dim W=\dim V\;$ , then $\;W=V\;$ .</p> Answer 3: <p>Your idea is very good, although not well expressed; but also the statement is not well written. Let's state it properly.</p> <blockquote> <p>Let $V$ be a $n$-dimensional vector space, let $U_i \subseteq V$ be subspaces of $V$ for $i = 1,2,\dots,r$ where $$U_1 \subseteq U_2 \subseteq \dots \subseteq U_r$$ If $r&gt;n+1$ then there exists an $i&lt;r$ for which $U_i = U_{i+1}=\dots=U_r$</p> </blockquote> <p>Let's prove, by induction on $r$, that, given a sequence of subspaces $$ U_1\subsetneq U_2\subsetneq\dots\subsetneq U_r $$ then $\dim U_r\ge r-1$. We only use the fact that $U\subsetneq U'$ implies $\dim U'\ge 1+\dim U$.</p> <p>If $r=1$, this is trivial. Assume it holds for $r-1$. Then, by induction hypothesis, $\dim U_{r-1}\ge r-2$ and so $\dim U_r\ge (r-2)+1=r-1$.</p> <p>So, if we have $$ U_1\subseteq U_2\subseteq\dots\subseteq U_r $$ and $r&gt;n+1$, the above result proves that not all inclusions are proper. </p> <p>Let $j$ be the maximum index such that $U_{j}\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j&lt;r-1$. Now, for $i=j+1$, $U_i=U_{i+1}=\dots=U_r$.</p>
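<p>To see the dimension bound concretely (an illustrative example, not taken from the answers above): in $\mathbb{R}^3$ the longest strictly increasing chain of subspaces has $n+1=4$ terms, e.g. $$\{0\}\subsetneq \operatorname{span}(e_1)\subsetneq \operatorname{span}(e_1,e_2)\subsetneq \mathbb{R}^3, \qquad \dim = 0,1,2,3.$$ Any chain of $r&gt;n+1$ nested subspaces therefore cannot be strictly increasing at every step, so at least one inclusion must be an equality.</p>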
Prove a convex and concave function can have at most 2 solutions
2,112,108
5
https://math.stackexchange.com/questions/2112108/prove-a-convex-and-concave-function-can-have-at-most-2-solutions
<p>Let $a,b \in \mathbb{R}$.</p> <p>Let $f : [a,b] \rightarrow \mathbb{R}$ be a continuous function on $[a,b]$, differentiable and strictly <strong>convex</strong> on $(a,b)$,</p> <p>and let $g : [a,b] \rightarrow \mathbb{R}$ be a continuous function on $[a,b]$, differentiable and strictly <strong>concave</strong> on $(a,b)$.</p> <p>How can I prove that the graphs of $f$ and $g$ intersect in at most two points, i.e. that $f(x)-g(x)=0$ has at most two solutions?</p>
[ { "answer_id": 2112132, "score": 6, "text": "<p><span class=\"math-container\">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>So we just have to prove that a convex function has at most <span class=\"math-container\">$2$</span> roots in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>If <span class=\"math-container\">$h(x)$</span> has <span class=\"math-container\">$3$</span> roots let's say, <span class=\"math-container\">$p,q,r$</span> then there is <span class=\"math-container\">$c \\in (p,q)$</span> such that <span class=\"math-container\">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class=\"math-container\">$d \\in (q,r)$</span> such that <span class=\"math-container\">$h'(d)=0$</span>.</p>\n\n<p>But it means that there is <span class=\"math-container\">$e \\in (c,d)$</span> such that <span class=\"math-container\">$(h')'(e)=0$</span>, which is a contradiction because <span class=\"math-container\">$h''(x)&gt;0$</span>. </p>\n" }, { "answer_id": 2112130, "score": 5, "text": "<p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\\lambda_2$ so that $x_2=\\lambda_2x_1+(1-\\lambda_2)x_3$ we would have\n$$\n(f-g)(x_2)=0=\\lambda_2(f-g)(x_1)+(1-\\lambda_2)(f-g)(x_3),\n$$\nwhich would be in contradiction with the strict convexity of $f-g$.</p>\n" }, { "answer_id": 2112128, "score": 4, "text": "<p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p>\n\n<p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p>\n\n<p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem, I think) but having two extrema violates monotonicity of the derivative.</p>\n" }, { "answer_id": 2112132, "score": 6, "text": "<p><span class=\"math-container\">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>So we just have to prove that a convex function has at most <span class=\"math-container\">$2$</span> roots in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>If <span class=\"math-container\">$h(x)$</span> has <span class=\"math-container\">$3$</span> roots let's say, <span class=\"math-container\">$p,q,r$</span> then there is <span class=\"math-container\">$c \\in (p,q)$</span> such that <span class=\"math-container\">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class=\"math-container\">$d \\in (q,r)$</span> such that <span class=\"math-container\">$h'(d)=0$</span>.</p>\n\n<p>But it means that there is <span class=\"math-container\">$e \\in (c,d)$</span> such that <span class=\"math-container\">$(h')'(e)=0$</span>, which is a contradiction because <span class=\"math-container\">$h''(x)&gt;0$</span>. 
</p>\n" }, { "answer_id": 2112130, "score": 5, "text": "<p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\\lambda_2$ so that $x_2=\\lambda_2x_1+(1-\\lambda_2)x_3$ we would have\n$$\n(f-g)(x_2)=0=\\lambda_2(f-g)(x_1)+(1-\\lambda_2)(f-g)(x_3),\n$$\nwhich would be in contradiction with the strict convexity of $f-g$.</p>\n" }, { "answer_id": 2112128, "score": 4, "text": "<p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p>\n\n<p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p>\n\n<p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem, I think) but having two extrema violates monotonicity of the derivative.</p>\n" }, { "answer_id": 2112132, "score": 6, "text": "<p><span class=\"math-container\">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>So we just have to prove that a convex function has at most <span class=\"math-container\">$2$</span> roots in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>If <span class=\"math-container\">$h(x)$</span> has <span class=\"math-container\">$3$</span> roots let's say, <span class=\"math-container\">$p,q,r$</span> then there is <span class=\"math-container\">$c \\in (p,q)$</span> such that <span class=\"math-container\">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class=\"math-container\">$d \\in (q,r)$</span> such that <span class=\"math-container\">$h'(d)=0$</span>.</p>\n\n<p>But it means that there is <span class=\"math-container\">$e \\in (c,d)$</span> such that <span class=\"math-container\">$(h')'(e)=0$</span>, which is a contradiction because <span class=\"math-container\">$h''(x)&gt;0$</span>. 
</p>\n" }, { "answer_id": 2112130, "score": 5, "text": "<p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\\lambda_2$ so that $x_2=\\lambda_2x_1+(1-\\lambda_2)x_3$ we would have\n$$\n(f-g)(x_2)=0=\\lambda_2(f-g)(x_1)+(1-\\lambda_2)(f-g)(x_3),\n$$\nwhich would be in contradiction with the strict convexity of $f-g$.</p>\n" }, { "answer_id": 2112128, "score": 4, "text": "<p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p>\n\n<p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p>\n\n<p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem, I think) but having two extrema violates monotonicity of the derivative.</p>\n" }, { "answer_id": 2112132, "score": 6, "text": "<p><span class=\"math-container\">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>So we just have to prove that a convex function has at most <span class=\"math-container\">$2$</span> roots in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>If <span class=\"math-container\">$h(x)$</span> has <span class=\"math-container\">$3$</span> roots let's say, <span class=\"math-container\">$p,q,r$</span> then there is <span class=\"math-container\">$c \\in (p,q)$</span> such that <span class=\"math-container\">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class=\"math-container\">$d \\in (q,r)$</span> such that <span class=\"math-container\">$h'(d)=0$</span>.</p>\n\n<p>But it means that there is <span class=\"math-container\">$e \\in (c,d)$</span> such that <span class=\"math-container\">$(h')'(e)=0$</span>, which is a contradiction because <span class=\"math-container\">$h''(x)&gt;0$</span>. 
</p>\n" }, { "answer_id": 2112130, "score": 5, "text": "<p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\\lambda_2$ so that $x_2=\\lambda_2x_1+(1-\\lambda_2)x_3$ we would have\n$$\n(f-g)(x_2)=0=\\lambda_2(f-g)(x_1)+(1-\\lambda_2)(f-g)(x_3),\n$$\nwhich would be in contradiction with the strict convexity of $f-g$.</p>\n" }, { "answer_id": 2112128, "score": 4, "text": "<p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p>\n\n<p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p>\n\n<p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem, I think) but having two extrema violates monotonicity of the derivative.</p>\n" }, { "answer_id": 2112132, "score": 6, "text": "<p><span class=\"math-container\">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>So we just have to prove that a convex function has at most <span class=\"math-container\">$2$</span> roots in <span class=\"math-container\">$[a,b]$</span>.</p>\n\n<p>If <span class=\"math-container\">$h(x)$</span> has <span class=\"math-container\">$3$</span> roots let's say, <span class=\"math-container\">$p,q,r$</span> then there is <span class=\"math-container\">$c \\in (p,q)$</span> such that <span class=\"math-container\">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class=\"math-container\">$d \\in (q,r)$</span> such that <span class=\"math-container\">$h'(d)=0$</span>.</p>\n\n<p>But it means that there is <span class=\"math-container\">$e \\in (c,d)$</span> such that <span class=\"math-container\">$(h')'(e)=0$</span>, which is a contradiction because <span class=\"math-container\">$h''(x)&gt;0$</span>. </p>\n" }, { "answer_id": 2112130, "score": 5, "text": "<p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\\lambda_2$ so that $x_2=\\lambda_2x_1+(1-\\lambda_2)x_3$ we would have\n$$\n(f-g)(x_2)=0=\\lambda_2(f-g)(x_1)+(1-\\lambda_2)(f-g)(x_3),\n$$\nwhich would be in contradiction with the strict convexity of $f-g$.</p>\n" }, { "answer_id": 2112128, "score": 4, "text": "<p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p>\n\n<p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p>\n\n<p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem, I think) but having two extrema violates monotonicity of the derivative.</p>\n" } ]
Question: Prove a convex and concave function can have at most 2 solutions Answer 1: <p><span class="math-container">$h(x)=f(x)-g(x)$</span> is a strictly convex function in <span class="math-container">$[a,b]$</span>.</p> <p>So we just have to prove that a convex function has at most <span class="math-container">$2$</span> roots in <span class="math-container">$[a,b]$</span>.</p> <p>If <span class="math-container">$h(x)$</span> has <span class="math-container">$3$</span> roots, say <span class="math-container">$p,q,r$</span>, then there is <span class="math-container">$c \in (p,q)$</span> such that <span class="math-container">$h'(c)=0$</span> (by Rolle's Theorem) and there is <span class="math-container">$d \in (q,r)$</span> such that <span class="math-container">$h'(d)=0$</span>.</p> <p>But it means that there is <span class="math-container">$e \in (c,d)$</span> such that <span class="math-container">$(h')'(e)=0$</span>, which is a contradiction because <span class="math-container">$h''(x)&gt;0$</span>.</p> Answer 2: <p>If there were three distinct roots $x_1&lt;x_2&lt;x_3$ of $f(x)-g(x)=0$ then by choosing $\lambda_2$ so that $x_2=\lambda_2x_1+(1-\lambda_2)x_3$ we would have $$ (f-g)(x_2)=0=\lambda_2(f-g)(x_1)+(1-\lambda_2)(f-g)(x_3), $$ which would be in contradiction with the strict convexity of $f-g$.</p> Answer 3: <p>If $f(x)$ is strictly convex and $g(x)$ is strictly concave, then $f(x) - g(x)$ is also strictly convex (the negative of a concave function is convex).</p> <p>A strictly convex function has its derivative monotonically increasing and so can have at most two zeroes.</p> <p>This comes about because a differentiable function with three zeroes on a closed interval $[a,b]$ must have at least two extrema on $(a,b)$ (a consequence of Rolle's Theorem) but having two extrema violates monotonicity of the derivative.</p>
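The same illustrative pair can also be checked numerically. The sketch below is my own addition (not from the thread) and assumes NumPy is available; it counts sign changes of h on a fine grid, each of which brackets exactly one root, so the count can never exceed two for a strictly convex h.

```python
# Minimal numeric sanity check (assumes NumPy): a strictly convex
# h = f - g shows at most two sign changes, hence at most two roots.
import numpy as np

f = lambda x: x**2         # strictly convex (illustrative choice)
g = lambda x: 1 - x**2     # strictly concave (illustrative choice)
h = lambda x: f(x) - g(x)  # h(x) = 2x^2 - 1

xs = np.linspace(-2.0, 2.0, 100_001)
vals = h(xs)
# a sign change between consecutive grid points brackets one root
sign_changes = int(np.sum(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0))
print(sign_changes)  # prints 2, consistent with the "at most two roots" claim
```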
What is the difference between relational logic and predicate logic?
2,112,147
5
https://math.stackexchange.com/questions/2112147/what-is-the-difference-between-relational-logic-and-predicate-logic
<p>I am studying the Introduction to Logic course from Stanford University and have begun learning about relational logic. However, when I search on Google for the terms there, I often end up with results from websites that teach predicate logic. Is there a difference between the two types of logic?</p> <p>I am talking about THIS course from Stanford: <a href="http://logic.stanford.edu/intrologic/notes/chapter_06.html" rel="noreferrer">http://logic.stanford.edu/intrologic/notes/chapter_06.html</a></p>
[ { "answer_id": 2112164, "score": 6, "text": "<p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedogogical, practical, and theoretical reasons for doing so). Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p>\n\n<p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p>\n\n<p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. This is certainly how this community uses the tag 'predicate-logic'</p>\n\n<p>... all of which means ...</p>\n\n<p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! You can also look for 'first-order logic' or 'quantificational logic'.</p>\n" }, { "answer_id": 2725724, "score": 5, "text": "<p>In the Stanford course \"relational logic\" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p>\n\n<p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>\n" }, { "answer_id": 2112164, "score": 6, "text": "<p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedogogical, practical, and theoretical reasons for doing so). Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p>\n\n<p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p>\n\n<p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. 
This is certainly how this community uses the tag 'predicate-logic'</p>\n\n<p>... all of which means ...</p>\n\n<p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! You can also look for 'first-order logic' or 'quantificational logic'.</p>\n" }, { "answer_id": 2725724, "score": 5, "text": "<p>In the Stanford course \"relational logic\" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p>\n\n<p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>\n" }, { "answer_id": 2112164, "score": 6, "text": "<p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedogogical, practical, and theoretical reasons for doing so). Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p>\n\n<p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p>\n\n<p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. This is certainly how this community uses the tag 'predicate-logic'</p>\n\n<p>... all of which means ...</p>\n\n<p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! You can also look for 'first-order logic' or 'quantificational logic'.</p>\n" }, { "answer_id": 2725724, "score": 5, "text": "<p>In the Stanford course \"relational logic\" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p>\n\n<p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>\n" }, { "answer_id": 2112164, "score": 6, "text": "<p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedogogical, practical, and theoretical reasons for doing so). 
Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p>\n\n<p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p>\n\n<p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. This is certainly how this community uses the tag 'predicate-logic'</p>\n\n<p>... all of which means ...</p>\n\n<p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! You can also look for 'first-order logic' or 'quantificational logic'.</p>\n" }, { "answer_id": 2725724, "score": 5, "text": "<p>In the Stanford course \"relational logic\" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p>\n\n<p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>\n" }, { "answer_id": 2112164, "score": 6, "text": "<p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedogogical, practical, and theoretical reasons for doing so). Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p>\n\n<p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p>\n\n<p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. This is certainly how this community uses the tag 'predicate-logic'</p>\n\n<p>... all of which means ...</p>\n\n<p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! 
You can also look for 'first-order logic' or 'quantificational logic'.</p>\n" }, { "answer_id": 2725724, "score": 5, "text": "<p>In the Stanford course \"relational logic\" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p>\n\n<p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>\n" } ]
Question: What is the difference between relational logic and predicate logic? Answer 1: <p>Some books use 'relational logic' to emphasize that it goes beyond unary predicates ... (and there are important pedagogical, practical, and theoretical reasons for doing so). Indeed, many books first discuss something they call 'categorical logic', restricted to just unary predicates. For example, Aristotle studied this kind of logic with claims like 'All humans are mortal'. (Then again, some people hold 'categorical logic' to be something different yet, see e.g. the Wikipedia page on 'Categorical Logic'.) </p> <p>Your book, however, uses 'relational logic' in a way synonymous with 'predicate logic', which is typically understood as the logic where you can have predicates of any arity. (Then again, some will insist that only 1-place relationships are 'predicates' (i.e. more like 'properties'), while 2- or more place relationships are 'relations', but not 'predicates' ...)</p> <p>In other words ... the terminology here is not fixed, so you will find different people have different definitions for these different logics. But, I think most people would agree with the claim that relational logic is a part of predicate logic, i.e. that 'predicate logic' is the more general logic. This is certainly how this community uses the tag 'predicate-logic'.</p> <p>... all of which means ...</p> <p>You can probably learn plenty about relational logic on the sites that talk about predicate logic! You can also look for 'first-order logic' or 'quantificational logic'.</p> Answer 2: <p>In the Stanford course "relational logic" just means first-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory. You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic.</p> <p>I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.</p>
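To make the arity point concrete, here is a small pair of formulas of my own (an illustration, not taken from the answers): a 'categorical' claim built from unary predicates only, next to a genuinely relational one that needs a binary predicate.

```latex
% Unary (categorical) vs. binary (relational) predicates;
% the predicate names Human, Mortal, Loves are hypothetical examples.
\forall x\,\bigl(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)\bigr)
\qquad \text{vs.} \qquad
\forall x\,\exists y\;\mathrm{Loves}(x, y)
```

(Under Herbrand semantics, as in the Stanford notes, the quantifiers in both formulas range over the ground terms of the language rather than over an arbitrary interpretation domain.)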
Differential equation / Separable
2,112,148
5
https://math.stackexchange.com/questions/2112148/differential-equation-separable
<p>I've been given the following ODE and have to solve for the general solution: $$ \displaystyle \frac{dy}{dx} = y\cot x $$ </p> <p>I've looked at the solution, and it does the following: </p> <p>$$ \int \frac{1}{y} \ dy = \int \cot x \ dx $$ $$ \ln y = \ln|\sin x| + c $$ $$\ln y = \ln|\sin x| + \ln c $$ $$ \ln y = \ln |c\sin x| $$ $$ y = c\sin x$$</p> <p>I don't quite understand how they were able to convert $c$ into $\ln c $. Since $c$ is just a constant, it can very well be negative, and $\ln$ of a negative number is undefined. </p> <p>I also don't understand why they were able to remove the absolute value sign?</p>
[ { "answer_id": 2112166, "score": 3, "text": "<p>Well, we have:</p>\n\n<p>$$\\ln\\left|\\text{y}\\left(x\\right)\\right|=\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}$$</p>\n\n<p>Now, raising each term to $e$:</p>\n\n<p>$$\\exp\\left(\\ln\\left|\\text{y}\\left(x\\right)\\right|\\right)=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}\\right)\\space\\Longleftrightarrow\\space\\left|\\text{y}\\left(x\\right)\\right|=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|\\right)\\cdot\\exp\\left(\\text{C}\\right)\\space\\Longleftrightarrow\\space$$\n$$\\left|\\text{y}\\left(x\\right)\\right|=\\text{C}\\cdot\\left|\\sin\\left(x\\right)\\right|$$</p>\n" }, { "answer_id": 2112167, "score": 3, "text": "<p>This is a very common thing that we do. The thing is, we take the constant term itself as $\\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\\pm \\ln c$ which would give you the result as $y = c \\sin x$ or $y = \\frac{\\sin x}{c}$. You can now just say that the solution is $y = k \\sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \\sin x$ as $c$ is basically just a constant.</p>\n\n<p>The only value that $k$ can't take in $y = k\\sin x$ is $0$, which is of course true.</p>\n" }, { "answer_id": 2112158, "score": 2, "text": "<p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p>\n\n<p>Like for instance</p>\n\n<p>$$\\log y=\\log x+c$$ is often transformed into</p>\n\n<p>$$y=cx$$</p>\n\n<p>instead of</p>\n\n<p>$$y=e^cx.$$</p>\n" }, { "answer_id": 2112189, "score": 2, "text": "<p>We have:\n$$\\ln |y|=\\ln|\\sin{x}|+c$$\nWe can do this differently by exponentiating both sides instead:\n$$e^{\\ln |y|}=e^{\\ln|\\sin{x}|+c}$$\n$$|y|=e^{\\ln|\\sin{x}|}\\cdot e^c$$\n$$|y|=|\\sin{x}|\\cdot e^c$$\nNow, note that $e^c$ is an <strong>arbitrary constant</strong>. Therefore, we can substitute $k=e^c$:\n$$|y|=k\\cdot |\\sin{x}|$$\nWhich explains why they \"converted\" from $\\ln{c}$ to $c$, so that the result looks nicer.</p>\n" }, { "answer_id": 2112174, "score": 1, "text": "<p>$\\ln:\\mathbb{R}_+^*\\longrightarrow\\mathbb{R}$ is a <strong>bijective</strong> function which means $$\\forall y\\in\\mathbb{R}, \\exists!\\ x\\in\\mathbb{R}_+^*, y=\\ln(x) $$\nSo, by changing the $c$ to $\\ln(c)$, the solution skips some simple steps which are: $$\\exists k\\in\\mathbb{R}_+^*,\\ c=\\ln(k) $$ \nsince $k$ is a constant we can rename it and can call it $c$.</p>\n\n<p>Then, we have 'the new' $c\\in\\mathbb{R}_+^*$, so it can be included under the absolute value sign. </p>\n" }, { "answer_id": 2112232, "score": 1, "text": "<p>I'll address only the arbitrary constant here.</p>\n\n<blockquote>\n <p>I don't quite understand how they were able to convert $c$ into $\\ln c $</p>\n</blockquote>\n\n<p>Yes, it's true that $c$ could be negative. But the $c$ in $\\ln|\\sin x| + c$ and the $c$ in $\\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p>\n\n<p>A more pedantic approach would be to explain as follows:</p>\n\n<hr>\n\n<p>$$\\ln y = \\ln |\\sin x| + c$$\nAt this point we would like to remove the natural logs. 
So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, then there must be some other (and positive) constant $c_2$ such that $\\ln (c_2) = c$. Then we have\n$$ \\ln y = \\ln|\\sin x| + \\ln(c_2).$$\nWell, $c_2$ itself is just an arbitrary constant (\"arbitrary\" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore\n$$ \\ln y = \\ln|\\sin x| + \\ln c.$$</p>\n\n<hr>\n\n<p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \\sin x$.</p>\n" }, { "answer_id": 2112166, "score": 3, "text": "<p>Well, we have:</p>\n\n<p>$$\\ln\\left|\\text{y}\\left(x\\right)\\right|=\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}$$</p>\n\n<p>Now, raising each term to $e$:</p>\n\n<p>$$\\exp\\left(\\ln\\left|\\text{y}\\left(x\\right)\\right|\\right)=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}\\right)\\space\\Longleftrightarrow\\space\\left|\\text{y}\\left(x\\right)\\right|=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|\\right)\\cdot\\exp\\left(\\text{C}\\right)\\space\\Longleftrightarrow\\space$$\n$$\\left|\\text{y}\\left(x\\right)\\right|=\\text{C}\\cdot\\left|\\sin\\left(x\\right)\\right|$$</p>\n" }, { "answer_id": 2112167, "score": 3, "text": "<p>This is a very common thing that we do. The thing is, we take the constant term itself as $\\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\\pm \\ln c$ which would give you the result as $y = c \\sin x$ or $y = \\frac{\\sin x}{c}$. You can now just say that the solution is $y = k \\sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \\sin x$ as $c$ is basically just a constant.</p>\n\n<p>The only value that $k$ can't take in $y = k\\sin x$ is $0$, which is of course true.</p>\n" }, { "answer_id": 2112158, "score": 2, "text": "<p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p>\n\n<p>Like for instance</p>\n\n<p>$$\\log y=\\log x+c$$ is often transformed into</p>\n\n<p>$$y=cx$$</p>\n\n<p>instead of</p>\n\n<p>$$y=e^cx.$$</p>\n" }, { "answer_id": 2112189, "score": 2, "text": "<p>We have:\n$$\\ln |y|=\\ln|\\sin{x}|+c$$\nWe can do this differently by exponentiating both sides instead:\n$$e^{\\ln |y|}=e^{\\ln|\\sin{x}|+c}$$\n$$|y|=e^{\\ln|\\sin{x}|}\\cdot e^c$$\n$$|y|=|\\sin{x}|\\cdot e^c$$\nNow, note that $e^c$ is an <strong>arbitrary constant</strong>. 
Therefore, we can substitute $k=e^c$:\n$$|y|=k\\cdot |\\sin{x}|$$\nWhich explains why they \"converted\" from $\\ln{c}$ to $c$, so that the result looks nicer.</p>\n" }, { "answer_id": 2112174, "score": 1, "text": "<p>$\\ln:\\mathbb{R}_+^*\\longrightarrow\\mathbb{R}$ is a <strong>bijective</strong> function which means $$\\forall y\\in\\mathbb{R}, \\exists!\\ x\\in\\mathbb{R}_+^*, y=\\ln(x) $$\nSo, by changing the $c$ to $\\ln(c)$, the solution skips some simple steps which are: $$\\exists k\\in\\mathbb{R}_+^*,\\ c=\\ln(k) $$ \nsince $k$ is a constant we can rename it and can call it $c$.</p>\n\n<p>Then, we have 'the new' $c\\in\\mathbb{R}_+^*$, so it can be included under the absolute value sign. </p>\n" }, { "answer_id": 2112232, "score": 1, "text": "<p>I'll address only the arbitrary constant here.</p>\n\n<blockquote>\n <p>I don't quite understand how they were able to convert $c$ into $\\ln c $</p>\n</blockquote>\n\n<p>Yes, it's true that $c$ could be negative. But the $c$ in $\\ln|\\sin x| + c$ and the $c$ in $\\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p>\n\n<p>A more pedantic approach would be to explain as follows:</p>\n\n<hr>\n\n<p>$$\\ln y = \\ln |\\sin x| + c$$\nAt this point we would like to remove the natural logs. So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, then there must be some other (and positive) constant $c_2$ such that $\\ln (c_2) = c$. Then we have\n$$ \\ln y = \\ln|\\sin x| + \\ln(c_2).$$\nWell, $c_2$ itself is just an arbitrary constant (\"arbitrary\" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore\n$$ \\ln y = \\ln|\\sin x| + \\ln c.$$</p>\n\n<hr>\n\n<p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \\sin x$.</p>\n" }, { "answer_id": 2112166, "score": 3, "text": "<p>Well, we have:</p>\n\n<p>$$\\ln\\left|\\text{y}\\left(x\\right)\\right|=\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}$$</p>\n\n<p>Now, raising each term to $e$:</p>\n\n<p>$$\\exp\\left(\\ln\\left|\\text{y}\\left(x\\right)\\right|\\right)=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}\\right)\\space\\Longleftrightarrow\\space\\left|\\text{y}\\left(x\\right)\\right|=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|\\right)\\cdot\\exp\\left(\\text{C}\\right)\\space\\Longleftrightarrow\\space$$\n$$\\left|\\text{y}\\left(x\\right)\\right|=\\text{C}\\cdot\\left|\\sin\\left(x\\right)\\right|$$</p>\n" }, { "answer_id": 2112167, "score": 3, "text": "<p>This is a very common thing that we do. The thing is, we take the constant term itself as $\\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\\pm \\ln c$ which would give you the result as $y = c \\sin x$ or $y = \\frac{\\sin x}{c}$. 
You can now just say that the solution is $y = k \\sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \\sin x$ as $c$ is basically just a constant.</p>\n\n<p>The only value that $k$ can't take in $y = k\\sin x$ is $0$, which is of course true.</p>\n" }, { "answer_id": 2112158, "score": 2, "text": "<p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p>\n\n<p>Like for instance</p>\n\n<p>$$\\log y=\\log x+c$$ is often transformed into</p>\n\n<p>$$y=cx$$</p>\n\n<p>instead of</p>\n\n<p>$$y=e^cx.$$</p>\n" }, { "answer_id": 2112189, "score": 2, "text": "<p>We have:\n$$\\ln |y|=\\ln|\\sin{x}|+c$$\nWe can do this differently by exponentiating both sides instead:\n$$e^{\\ln |y|}=e^{\\ln|\\sin{x}|+c}$$\n$$|y|=e^{\\ln|\\sin{x}|}\\cdot e^c$$\n$$|y|=|\\sin{x}|\\cdot e^c$$\nNow, note that $e^c$ is an <strong>arbitrary constant</strong>. Therefore, we can substitute $k=e^c$:\n$$|y|=k\\cdot |\\sin{x}|$$\nWhich explains why they \"converted\" from $\\ln{c}$ to $c$, so that the result looks nicer.</p>\n" }, { "answer_id": 2112174, "score": 1, "text": "<p>$\\ln:\\mathbb{R}_+^*\\longrightarrow\\mathbb{R}$ is a <strong>bijective</strong> function which means $$\\forall y\\in\\mathbb{R}, \\exists!\\ x\\in\\mathbb{R}_+^*, y=\\ln(x) $$\nSo, by changing the $c$ to $\\ln(c)$, the solution skips some simple steps which are: $$\\exists k\\in\\mathbb{R}_+^*,\\ c=\\ln(k) $$ \nsince $k$ is a constant we can rename it and can call it $c$.</p>\n\n<p>Then, we have 'the new' $c\\in\\mathbb{R}_+^*$, so it can be included under the absolute value sign. </p>\n" }, { "answer_id": 2112232, "score": 1, "text": "<p>I'll address only the arbitrary constant here.</p>\n\n<blockquote>\n <p>I don't quite understand how they were able to convert $c$ into $\\ln c $</p>\n</blockquote>\n\n<p>Yes, it's true that $c$ could be negative. But the $c$ in $\\ln|\\sin x| + c$ and the $c$ in $\\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p>\n\n<p>A more pedantic approach would be to explain as follows:</p>\n\n<hr>\n\n<p>$$\\ln y = \\ln |\\sin x| + c$$\nAt this point we would like to remove the natural logs. So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, then there must be some other (and positive) constant $c_2$ such that $\\ln (c_2) = c$. Then we have\n$$ \\ln y = \\ln|\\sin x| + \\ln(c_2).$$\nWell, $c_2$ itself is just an arbitrary constant (\"arbitrary\" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore\n$$ \\ln y = \\ln|\\sin x| + \\ln c.$$</p>\n\n<hr>\n\n<p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. 
So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \\sin x$.</p>\n" }, { "answer_id": 2112166, "score": 3, "text": "<p>Well, we have:</p>\n\n<p>$$\\ln\\left|\\text{y}\\left(x\\right)\\right|=\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}$$</p>\n\n<p>Now, raising each term to $e$:</p>\n\n<p>$$\\exp\\left(\\ln\\left|\\text{y}\\left(x\\right)\\right|\\right)=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}\\right)\\space\\Longleftrightarrow\\space\\left|\\text{y}\\left(x\\right)\\right|=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|\\right)\\cdot\\exp\\left(\\text{C}\\right)\\space\\Longleftrightarrow\\space$$\n$$\\left|\\text{y}\\left(x\\right)\\right|=\\text{C}\\cdot\\left|\\sin\\left(x\\right)\\right|$$</p>\n" }, { "answer_id": 2112167, "score": 3, "text": "<p>This is a very common thing that we do. The thing is, we take the constant term itself as $\\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\\pm \\ln c$ which would give you the result as $y = c \\sin x$ or $y = \\frac{\\sin x}{c}$. You can now just say that the solution is $y = k \\sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \\sin x$ as $c$ is basically just a constant.</p>\n\n<p>The only value that $k$ can't take in $y = k\\sin x$ is $0$, which is of course true.</p>\n" }, { "answer_id": 2112158, "score": 2, "text": "<p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p>\n\n<p>Like for instance</p>\n\n<p>$$\\log y=\\log x+c$$ is often transformed into</p>\n\n<p>$$y=cx$$</p>\n\n<p>instead of</p>\n\n<p>$$y=e^cx.$$</p>\n" }, { "answer_id": 2112189, "score": 2, "text": "<p>We have:\n$$\\ln |y|=\\ln|\\sin{x}|+c$$\nWe can do this differently by exponentiating both sides instead:\n$$e^{\\ln |y|}=e^{\\ln|\\sin{x}|+c}$$\n$$|y|=e^{\\ln|\\sin{x}|}\\cdot e^c$$\n$$|y|=|\\sin{x}|\\cdot e^c$$\nNow, note that $e^c$ is an <strong>arbitrary constant</strong>. Therefore, we can substitute $k=e^c$:\n$$|y|=k\\cdot |\\sin{x}|$$\nWhich explains why they \"converted\" from $\\ln{c}$ to $c$, so that the result looks nicer.</p>\n" }, { "answer_id": 2112174, "score": 1, "text": "<p>$\\ln:\\mathbb{R}_+^*\\longrightarrow\\mathbb{R}$ is a <strong>bijective</strong> function which means $$\\forall y\\in\\mathbb{R}, \\exists!\\ x\\in\\mathbb{R}_+^*, y=\\ln(x) $$\nSo, by changing the $c$ to $\\ln(c)$, the solution skips some simple steps which are: $$\\exists k\\in\\mathbb{R}_+^*,\\ c=\\ln(k) $$ \nsince $k$ is a constant we can rename it and can call it $c$.</p>\n\n<p>Then, we have 'the new' $c\\in\\mathbb{R}_+^*$, so it can be included under the absolute value sign. </p>\n" }, { "answer_id": 2112232, "score": 1, "text": "<p>I'll address only the arbitrary constant here.</p>\n\n<blockquote>\n <p>I don't quite understand how they were able to convert $c$ into $\\ln c $</p>\n</blockquote>\n\n<p>Yes, it's true that $c$ could be negative. But the $c$ in $\\ln|\\sin x| + c$ and the $c$ in $\\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. 
Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p>\n\n<p>A more pedantic approach would be to explain as follows:</p>\n\n<hr>\n\n<p>$$\\ln y = \\ln |\\sin x| + c$$\nAt this point we would like to remove the natural logs. So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, then there must be some other (and positive) constant $c_2$ such that $\\ln (c_2) = c$. Then we have\n$$ \\ln y = \\ln|\\sin x| + \\ln(c_2).$$\nWell, $c_2$ itself is just an arbitrary constant (\"arbitrary\" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore\n$$ \\ln y = \\ln|\\sin x| + \\ln c.$$</p>\n\n<hr>\n\n<p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \\sin x$.</p>\n" }, { "answer_id": 2112166, "score": 3, "text": "<p>Well, we have:</p>\n\n<p>$$\\ln\\left|\\text{y}\\left(x\\right)\\right|=\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}$$</p>\n\n<p>Now, raising each term to $e$:</p>\n\n<p>$$\\exp\\left(\\ln\\left|\\text{y}\\left(x\\right)\\right|\\right)=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|+\\text{C}\\right)\\space\\Longleftrightarrow\\space\\left|\\text{y}\\left(x\\right)\\right|=\\exp\\left(\\ln\\left|\\sin\\left(x\\right)\\right|\\right)\\cdot\\exp\\left(\\text{C}\\right)\\space\\Longleftrightarrow\\space$$\n$$\\left|\\text{y}\\left(x\\right)\\right|=\\text{C}\\cdot\\left|\\sin\\left(x\\right)\\right|$$</p>\n" }, { "answer_id": 2112167, "score": 3, "text": "<p>This is a very common thing that we do. The thing is, we take the constant term itself as $\\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\\pm \\ln c$ which would give you the result as $y = c \\sin x$ or $y = \\frac{\\sin x}{c}$. You can now just say that the solution is $y = k \\sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \\sin x$ as $c$ is basically just a constant.</p>\n\n<p>The only value that $k$ can't take in $y = k\\sin x$ is $0$, which is of course true.</p>\n" }, { "answer_id": 2112158, "score": 2, "text": "<p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p>\n\n<p>Like for instance</p>\n\n<p>$$\\log y=\\log x+c$$ is often transformed into</p>\n\n<p>$$y=cx$$</p>\n\n<p>instead of</p>\n\n<p>$$y=e^cx.$$</p>\n" }, { "answer_id": 2112189, "score": 2, "text": "<p>We have:\n$$\\ln |y|=\\ln|\\sin{x}|+c$$\nWe can do this differently by exponentiating both sides instead:\n$$e^{\\ln |y|}=e^{\\ln|\\sin{x}|+c}$$\n$$|y|=e^{\\ln|\\sin{x}|}\\cdot e^c$$\n$$|y|=|\\sin{x}|\\cdot e^c$$\nNow, note that $e^c$ is an <strong>arbitrary constant</strong>. 
Therefore, we can substitute $k=e^c$:\n$$|y|=k\\cdot |\\sin{x}|$$\nWhich explains why they \"converted\" from $\\ln{c}$ to $c$, so that the result looks nicer.</p>\n" }, { "answer_id": 2112174, "score": 1, "text": "<p>$\\ln:\\mathbb{R}_+^*\\longrightarrow\\mathbb{R}$ is a <strong>bijective</strong> function which means $$\\forall y\\in\\mathbb{R}, \\exists!\\ x\\in\\mathbb{R}_+^*, y=\\ln(x) $$\nSo, by changing the $c$ to $\\ln(c)$, the solution skips some simple steps which are: $$\\exists k\\in\\mathbb{R}_+^*,\\ c=\\ln(k) $$ \nsince $k$ is a constant we can rename it and can call it $c$.</p>\n\n<p>Then, we have 'the new' $c\\in\\mathbb{R}_+^*$, so it can be included under the absolute value sign. </p>\n" }, { "answer_id": 2112232, "score": 1, "text": "<p>I'll address only the arbitrary constant here.</p>\n\n<blockquote>\n <p>I don't quite understand how they were able to convert $c$ into $\\ln c $</p>\n</blockquote>\n\n<p>Yes, it's true that $c$ could be negative. But the $c$ in $\\ln|\\sin x| + c$ and the $c$ in $\\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p>\n\n<p>A more pedantic approach would be to explain as follows:</p>\n\n<hr>\n\n<p>$$\\ln y = \\ln |\\sin x| + c$$\nAt this point we would like to remove the natural logs. So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, then there must be some other (and positive) constant $c_2$ such that $\\ln (c_2) = c$. Then we have\n$$ \\ln y = \\ln|\\sin x| + \\ln(c_2).$$\nWell, $c_2$ itself is just an arbitrary constant (\"arbitrary\" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore\n$$ \\ln y = \\ln|\\sin x| + \\ln c.$$</p>\n\n<hr>\n\n<p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \\sin x$.</p>\n" } ]
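As a quick cross-check of the final answer $y = c\sin x$ (my own addition, not from the thread), a computer algebra system reproduces it directly. The sketch below assumes SymPy is installed:

```python
# Solve dy/dx = y*cot(x) symbolically and compare with y = c*sin(x).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), y(x) * sp.cot(x))
sol = sp.dsolve(ode, y(x))
print(sol)  # expected: Eq(y(x), C1*sin(x)), matching y = c*sin(x) above
```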
Question: Differential equation / Separable
Answer 1: <p>Well, we have:</p> <p>$$\ln\left|\text{y}\left(x\right)\right|=\ln\left|\sin\left(x\right)\right|+\text{C}$$</p> <p>Now, exponentiating both sides:</p> <p>$$\exp\left(\ln\left|\text{y}\left(x\right)\right|\right)=\exp\left(\ln\left|\sin\left(x\right)\right|+\text{C}\right)\space\Longleftrightarrow\space\left|\text{y}\left(x\right)\right|=\exp\left(\ln\left|\sin\left(x\right)\right|\right)\cdot\exp\left(\text{C}\right)\space\Longleftrightarrow\space$$ $$\left|\text{y}\left(x\right)\right|=\text{C}\cdot\left|\sin\left(x\right)\right|$$</p> <p>(In the last step the positive constant $\exp\left(\text{C}\right)$ has simply been renamed $\text{C}$.)</p>
Answer 2: <p>This is a very common thing that we do. The thing is, we take the constant term itself as $\ln c$ and not $c$. Yes, this can lead to problems. To be on the safe side, you can take the constant terms to be $\pm \ln c$, which would give you the result as $y = c \sin x$ or $y = \frac{\sin x}{c}$. You can now just say that the solution is $y = k \sin x$ where $k$ is another constant term. Hence this doesn't cause any contradiction with writing the result as $y = c \sin x$, since $c$ is basically just a constant.</p> <p>The only value that $k$ can't take in $y = k\sin x$ is $0$, which is of course true.</p>
Answer 3: <p>When integration constants appear, their value remains undefined until you bring an extra condition. Until then, silently replacing the constant by a function of itself is harmless.</p> <p>For instance,</p> <p>$$\log y=\log x+c$$ is often transformed into</p> <p>$$y=cx$$</p> <p>instead of</p> <p>$$y=e^cx.$$</p>
Answer 4: <p>We have: $$\ln |y|=\ln|\sin{x}|+c$$ We can do this differently by exponentiating both sides instead: $$e^{\ln |y|}=e^{\ln|\sin{x}|+c}$$ $$|y|=e^{\ln|\sin{x}|}\cdot e^c$$ $$|y|=|\sin{x}|\cdot e^c$$ Now, note that $e^c$ is an <strong>arbitrary positive constant</strong>. Therefore, we can substitute $k=e^c$: $$|y|=k\cdot |\sin{x}|$$ Which explains why they "converted" from $\ln{c}$ to $c$, so that the result looks nicer.</p>
Answer 5: <p>$\ln:\mathbb{R}_+^*\longrightarrow\mathbb{R}$ is a <strong>bijective</strong> function, which means $$\forall y\in\mathbb{R}, \exists!\ x\in\mathbb{R}_+^*, y=\ln(x) $$ So, by changing the $c$ to $\ln(c)$, the solution skips some simple steps, which are: $$\exists k\in\mathbb{R}_+^*,\ c=\ln(k) $$ since $k$ is a constant, we can rename it and call it $c$.</p> <p>Then, since 'the new' $c\in\mathbb{R}_+^*$ is positive, it can be included under the absolute value sign.</p>
Answer 6: <p>I'll address only the arbitrary constant here.</p> <blockquote> <p>I don't quite understand how they were able to convert $c$ into $\ln c $</p> </blockquote> <p>Yes, it's true that $c$ could be negative. But the $c$ in $\ln|\sin x| + c$ and the $c$ in $\ln c$ are not actually the same $c$. This is a fairly standard borderline abuse of notation in math. Generally we use $c$ (or capital $C$) to represent arbitrary constants of integration, even when those arbitrary constants change form, so to speak.</p> <p>A more pedantic approach would be to explain as follows:</p> <hr> <p>$$\ln y = \ln |\sin x| + c$$ At this point we would like to remove the natural logs. So, since $c$ is some arbitrary real number and since the natural log function is surjective onto the real numbers, there must be some other (and positive) constant $c_2$ such that $\ln (c_2) = c$. Then we have $$ \ln y = \ln|\sin x| + \ln(c_2).$$ Well, $c_2$ itself is just an arbitrary constant ("arbitrary" subject to the restriction that it must be positive), so let's just keep calling it $c$ so we don't get everything muddled with different constant names. Therefore $$ \ln y = \ln|\sin x| + \ln c.$$</p> <hr> <p>In short, you may see solutions that always use $c$ to represent arbitrary constants, even if they are different arbitrary constants along the way. <strong>But the only constant we really care about is the one in the solution we obtain at the end.</strong> That's pretty much how we justify calling all arbitrary constants $c$ or $C$. So in this problem, the only arbitrary constant we care about is the $c$ in $y = c \sin x$.</p>
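A small numerical sanity check of the final family $y = c\sin x$ (a minimal Python sketch; the original separable equation is not quoted in this excerpt, so the ODE $\frac{dy}{dx} = y\cot x$ is assumed here, being the one whose separated antiderivatives give $\ln|y| = \ln|\sin x| + C$). It confirms that the residual vanishes for several constants $c$, including negative ones, which is why losing the sign in the $c \mapsto \ln c$ shortcut costs nothing:

```python
import math

def residual(c, x):
    # Assumed ODE: dy/dx = y * cot(x), inferred from ln|y| = ln|sin x| + C.
    y = c * math.sin(x)      # candidate solution y = c*sin(x)
    dydx = c * math.cos(x)   # its exact derivative
    return dydx - y * (math.cos(x) / math.sin(x))

for c in (2.0, 0.5, -3.0):   # negative constants work just as well
    worst = max(abs(residual(c, x)) for x in (0.3, 1.0, 2.5))
    print(f"c = {c:5.1f}  max |residual| = {worst:.2e}")  # ~1e-16, i.e. zero
```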
What is the meaning of $\sum_{i,j = 1}^n$?
2,112,774
5
https://math.stackexchange.com/questions/2112774/what-is-the-meaning-of-sum-i-j-1n
<p>In the multi-dimensional Ito formula, this notation pops up: $$\sum_{i,j = 1}^n$$ What does that even mean? Do we take $i = 1$, then $j = 1,\ldots,n$, then $i = 2$ and $j = 1,\ldots,n$, and so on? Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on? If it is the latter, then why don't they just write $\sum_{k=1}^n$, and then everywhere inside the summation, exchange both $j$ and $i$ for $k$? If it is the former, then surely the double summation notation makes the point clearer and is also better suited for calculation (i.e., if you want to take a constant in the inner sum out into the outer).</p>
[ { "answer_id": 2112781, "score": 6, "text": "<p>It has the following meaning\n$$\\sum_{i,j=1}^n=\\sum_{i=1}^n\\sum_{j=1}^n=\\sum_{j=1}^n\\sum_{i=1}^n$$\nSince the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>\n" }, { "answer_id": 2113378, "score": 2, "text": "<p>It's the summation of a square array. Here's a concrete example: $$\\sum_{i, j = 1}^5 \\gcd(i, j).$$ You <em>could</em> start with $\\gcd(1, 1)$, then $\\gcd(1, 2)$, $\\gcd(1, 3)$, and so on and so forth until reaching $\\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p>\n\n<p>I personally would prefer $$\\sum_{i = 1}^5 \\sum_{j = 1}^5 \\gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p>\n\n<blockquote>\n <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p>\n</blockquote>\n\n<p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable, like you said. it's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen, I presume we're all human here.</p>\n" }, { "answer_id": 2112781, "score": 6, "text": "<p>It has the following meaning\n$$\\sum_{i,j=1}^n=\\sum_{i=1}^n\\sum_{j=1}^n=\\sum_{j=1}^n\\sum_{i=1}^n$$\nSince the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>\n" }, { "answer_id": 2113378, "score": 2, "text": "<p>It's the summation of a square array. Here's a concrete example: $$\\sum_{i, j = 1}^5 \\gcd(i, j).$$ You <em>could</em> start with $\\gcd(1, 1)$, then $\\gcd(1, 2)$, $\\gcd(1, 3)$, and so on and so forth until reaching $\\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p>\n\n<p>I personally would prefer $$\\sum_{i = 1}^5 \\sum_{j = 1}^5 \\gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p>\n\n<blockquote>\n <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p>\n</blockquote>\n\n<p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable, like you said. it's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen, I presume we're all human here.</p>\n" }, { "answer_id": 2112781, "score": 6, "text": "<p>It has the following meaning\n$$\\sum_{i,j=1}^n=\\sum_{i=1}^n\\sum_{j=1}^n=\\sum_{j=1}^n\\sum_{i=1}^n$$\nSince the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>\n" }, { "answer_id": 2113378, "score": 2, "text": "<p>It's the summation of a square array. 
Here's a concrete example: $$\\sum_{i, j = 1}^5 \\gcd(i, j).$$ You <em>could</em> start with $\\gcd(1, 1)$, then $\\gcd(1, 2)$, $\\gcd(1, 3)$, and so on and so forth until reaching $\\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p>\n\n<p>I personally would prefer $$\\sum_{i = 1}^5 \\sum_{j = 1}^5 \\gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p>\n\n<blockquote>\n <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p>\n</blockquote>\n\n<p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable, like you said. it's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen, I presume we're all human here.</p>\n" }, { "answer_id": 2112781, "score": 6, "text": "<p>It has the following meaning\n$$\\sum_{i,j=1}^n=\\sum_{i=1}^n\\sum_{j=1}^n=\\sum_{j=1}^n\\sum_{i=1}^n$$\nSince the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>\n" }, { "answer_id": 2113378, "score": 2, "text": "<p>It's the summation of a square array. Here's a concrete example: $$\\sum_{i, j = 1}^5 \\gcd(i, j).$$ You <em>could</em> start with $\\gcd(1, 1)$, then $\\gcd(1, 2)$, $\\gcd(1, 3)$, and so on and so forth until reaching $\\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p>\n\n<p>I personally would prefer $$\\sum_{i = 1}^5 \\sum_{j = 1}^5 \\gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p>\n\n<blockquote>\n <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p>\n</blockquote>\n\n<p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable, like you said. it's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen, I presume we're all human here.</p>\n" }, { "answer_id": 2112781, "score": 6, "text": "<p>It has the following meaning\n$$\\sum_{i,j=1}^n=\\sum_{i=1}^n\\sum_{j=1}^n=\\sum_{j=1}^n\\sum_{i=1}^n$$\nSince the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>\n" }, { "answer_id": 2113378, "score": 2, "text": "<p>It's the summation of a square array. Here's a concrete example: $$\\sum_{i, j = 1}^5 \\gcd(i, j).$$ You <em>could</em> start with $\\gcd(1, 1)$, then $\\gcd(1, 2)$, $\\gcd(1, 3)$, and so on and so forth until reaching $\\gcd(5, 5)$. 
But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p>\n\n<p>I personally would prefer $$\\sum_{i = 1}^5 \\sum_{j = 1}^5 \\gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p>\n\n<blockquote>\n <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p>\n</blockquote>\n\n<p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable, like you said. it's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen, I presume we're all human here.</p>\n" } ]
Question: What is the meaning of $\sum_{i,j = 1}^n$?
Answer 1: <p>It has the following meaning $$\sum_{i,j=1}^n=\sum_{i=1}^n\sum_{j=1}^n=\sum_{j=1}^n\sum_{i=1}^n$$ Since the sums are finite, the order of summation does not matter. This notation is convenient because it takes up less space than writing it out.</p>
Answer 2: <p>It's the summation of a square array. Here's a concrete example: $$\sum_{i, j = 1}^5 \gcd(i, j).$$ You <em>could</em> start with $\gcd(1, 1)$, then $\gcd(1, 2)$, $\gcd(1, 3)$, and so on and so forth until reaching $\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated).</p> <p>I personally would prefer $$\sum_{i = 1}^5 \sum_{j = 1}^5 \gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved.</p> <blockquote> <p>Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on?</p> </blockquote> <p>As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable. It's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen; I presume we're all human here.</p>
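To make the accepted reading concrete, here is a minimal Python sketch contrasting the two interpretations raised in the question, using the $\gcd$ example from the second answer with $n = 5$: the double sum $\sum_{i,j=1}^{n}$ runs over all $n^2$ pairs (nested loops), while the diagonal reading $\sum_{k=1}^{n}$ would visit only $i = j = k$:

```python
from math import gcd

n = 5

# Intended reading: sum over ALL pairs (i, j) -- two nested loops.
double_sum = sum(gcd(i, j) for i in range(1, n + 1)
                           for j in range(1, n + 1))

# The other reading from the question: only the diagonal i = j = k.
diagonal_sum = sum(gcd(k, k) for k in range(1, n + 1))

print(double_sum)    # 37 = sum_{i,j=1}^5 gcd(i, j), over all 25 pairs
print(diagonal_sum)  # 15 = 1+2+3+4+5, since gcd(k, k) = k
```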
If $\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0$, show that $\lim_{n\to\infty}{a_n} =0$.
2,113,399
5
https://math.stackexchange.com/questions/2113399/if-lim-n-to-infty-fraca-n1a-n-0-show-that-lim-n-to-inftya-n
<p><strong>Problem:</strong> </p> <p>If $\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0$, show that $\lim_{n\to\infty}a_n = 0$.</p> <p><strong>Attempted Proof:</strong> </p> <p>Let $\epsilon &gt; 0$. From the hypothesis, $\exists \ N \in \mathbb{P}$ such that if $n \geq N$, then $$\left|\ \frac{a_{n+1}}{a_n} - 0 \ \right| &lt; \epsilon.$$</p> <p>This implies</p> <p>$$\left| {a_{n+1}} \right| &lt; \epsilon \left| a_n \right|.$$</p> <p>Thus, let $n\geq N'$ such that $$\left|\ \frac{a_{n+1}}{a_n} - 0 \ \right| &lt; \epsilon\left|a_n\right|.$$</p> <p>Then we have $\left|a_{n+1}\right| &lt; \epsilon,$ which implies $\lim_{n\to\infty}a_n = 0$.</p> <p>My main concern with my proof is that $\epsilon$ depends on $a_n$. Is this an issue?</p>
[ { "answer_id": 2113444, "score": 6, "text": "<p>Since $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$, then for $\\epsilon=\\frac12$, there is $N\\in\\mathbb{N}$ such that when $n\\ge N$,\n$$ \\bigg|\\frac{a_{n+1}}{a_n}\\bigg|&lt;\\epsilon. $$\nThus for $n&gt;N$, one has\n$$ \\bigg|\\frac{a_{N+1}}{a_N}\\bigg|&lt;\\epsilon, \\bigg|\\frac{a_{N+2}}{a_{N+1}}\\bigg|&lt;\\epsilon,\\cdots,\\bigg|\\frac{a_{n}}{a_{n-1}}\\bigg|&lt;\\epsilon$$\nand hence\n$$ \\bigg|\\frac{a_{n}}{a_N}\\bigg|&lt;\\epsilon^{n-N}$$\nor\n$$ |a_n|&lt;\\epsilon^{n-N}|a_N|. $$\nSo\n$$ \\lim_{n\\to\\infty}a_n=0.$$</p>\n" }, { "answer_id": 2113417, "score": 3, "text": "<p>For $\\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \\ge N$, so as $(|a_n|)$ is bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\\lim \\frac{a_{n+1}}{a_n}$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>\n" }, { "answer_id": 2113445, "score": 3, "text": "<p>That would be an issue, yes. The quickest way to prove this is to recognize that $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$ tells you that $\\sum_0^\\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$. </p>\n" }, { "answer_id": 2113420, "score": 1, "text": "<p>You should study $|a_n|$ and see that after $N$ big enough, you have $|a_{n+1}|&lt;|a_n|$...</p>\n" }, { "answer_id": 2113444, "score": 6, "text": "<p>Since $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$, then for $\\epsilon=\\frac12$, there is $N\\in\\mathbb{N}$ such that when $n\\ge N$,\n$$ \\bigg|\\frac{a_{n+1}}{a_n}\\bigg|&lt;\\epsilon. $$\nThus for $n&gt;N$, one has\n$$ \\bigg|\\frac{a_{N+1}}{a_N}\\bigg|&lt;\\epsilon, \\bigg|\\frac{a_{N+2}}{a_{N+1}}\\bigg|&lt;\\epsilon,\\cdots,\\bigg|\\frac{a_{n}}{a_{n-1}}\\bigg|&lt;\\epsilon$$\nand hence\n$$ \\bigg|\\frac{a_{n}}{a_N}\\bigg|&lt;\\epsilon^{n-N}$$\nor\n$$ |a_n|&lt;\\epsilon^{n-N}|a_N|. $$\nSo\n$$ \\lim_{n\\to\\infty}a_n=0.$$</p>\n" }, { "answer_id": 2113417, "score": 3, "text": "<p>For $\\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \\ge N$, so as $(|a_n|)$ is bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\\lim \\frac{a_{n+1}}{a_n}$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>\n" }, { "answer_id": 2113445, "score": 3, "text": "<p>That would be an issue, yes. The quickest way to prove this is to recognize that $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$ tells you that $\\sum_0^\\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$. </p>\n" }, { "answer_id": 2113420, "score": 1, "text": "<p>You should study $|a_n|$ and see that after $N$ big enough, you have $|a_{n+1}|&lt;|a_n|$...</p>\n" }, { "answer_id": 2113444, "score": 6, "text": "<p>Since $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$, then for $\\epsilon=\\frac12$, there is $N\\in\\mathbb{N}$ such that when $n\\ge N$,\n$$ \\bigg|\\frac{a_{n+1}}{a_n}\\bigg|&lt;\\epsilon. $$\nThus for $n&gt;N$, one has\n$$ \\bigg|\\frac{a_{N+1}}{a_N}\\bigg|&lt;\\epsilon, \\bigg|\\frac{a_{N+2}}{a_{N+1}}\\bigg|&lt;\\epsilon,\\cdots,\\bigg|\\frac{a_{n}}{a_{n-1}}\\bigg|&lt;\\epsilon$$\nand hence\n$$ \\bigg|\\frac{a_{n}}{a_N}\\bigg|&lt;\\epsilon^{n-N}$$\nor\n$$ |a_n|&lt;\\epsilon^{n-N}|a_N|. 
$$\nSo\n$$ \\lim_{n\\to\\infty}a_n=0.$$</p>\n" }, { "answer_id": 2113417, "score": 3, "text": "<p>For $\\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \\ge N$, so as $(|a_n|)$ is bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\\lim \\frac{a_{n+1}}{a_n}$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>\n" }, { "answer_id": 2113445, "score": 3, "text": "<p>That would be an issue, yes. The quickest way to prove this is to recognize that $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$ tells you that $\\sum_0^\\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$. </p>\n" }, { "answer_id": 2113420, "score": 1, "text": "<p>You should study $|a_n|$ and see that after $N$ big enough, you have $|a_{n+1}|&lt;|a_n|$...</p>\n" }, { "answer_id": 2113444, "score": 6, "text": "<p>Since $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$, then for $\\epsilon=\\frac12$, there is $N\\in\\mathbb{N}$ such that when $n\\ge N$,\n$$ \\bigg|\\frac{a_{n+1}}{a_n}\\bigg|&lt;\\epsilon. $$\nThus for $n&gt;N$, one has\n$$ \\bigg|\\frac{a_{N+1}}{a_N}\\bigg|&lt;\\epsilon, \\bigg|\\frac{a_{N+2}}{a_{N+1}}\\bigg|&lt;\\epsilon,\\cdots,\\bigg|\\frac{a_{n}}{a_{n-1}}\\bigg|&lt;\\epsilon$$\nand hence\n$$ \\bigg|\\frac{a_{n}}{a_N}\\bigg|&lt;\\epsilon^{n-N}$$\nor\n$$ |a_n|&lt;\\epsilon^{n-N}|a_N|. $$\nSo\n$$ \\lim_{n\\to\\infty}a_n=0.$$</p>\n" }, { "answer_id": 2113417, "score": 3, "text": "<p>For $\\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \\ge N$, so as $(|a_n|)$ is bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\\lim \\frac{a_{n+1}}{a_n}$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>\n" }, { "answer_id": 2113445, "score": 3, "text": "<p>That would be an issue, yes. The quickest way to prove this is to recognize that $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$ tells you that $\\sum_0^\\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$. </p>\n" }, { "answer_id": 2113420, "score": 1, "text": "<p>You should study $|a_n|$ and see that after $N$ big enough, you have $|a_{n+1}|&lt;|a_n|$...</p>\n" }, { "answer_id": 2113444, "score": 6, "text": "<p>Since $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$, then for $\\epsilon=\\frac12$, there is $N\\in\\mathbb{N}$ such that when $n\\ge N$,\n$$ \\bigg|\\frac{a_{n+1}}{a_n}\\bigg|&lt;\\epsilon. $$\nThus for $n&gt;N$, one has\n$$ \\bigg|\\frac{a_{N+1}}{a_N}\\bigg|&lt;\\epsilon, \\bigg|\\frac{a_{N+2}}{a_{N+1}}\\bigg|&lt;\\epsilon,\\cdots,\\bigg|\\frac{a_{n}}{a_{n-1}}\\bigg|&lt;\\epsilon$$\nand hence\n$$ \\bigg|\\frac{a_{n}}{a_N}\\bigg|&lt;\\epsilon^{n-N}$$\nor\n$$ |a_n|&lt;\\epsilon^{n-N}|a_N|. $$\nSo\n$$ \\lim_{n\\to\\infty}a_n=0.$$</p>\n" }, { "answer_id": 2113417, "score": 3, "text": "<p>For $\\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \\ge N$, so as $(|a_n|)$ is bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\\lim \\frac{a_{n+1}}{a_n}$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>\n" }, { "answer_id": 2113445, "score": 3, "text": "<p>That would be an issue, yes. The quickest way to prove this is to recognize that $\\lim_{n\\to\\infty}\\frac{a_{n+1}}{a_n}=0$ tells you that $\\sum_0^\\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$. 
</p>\n" }, { "answer_id": 2113420, "score": 1, "text": "<p>You should study $|a_n|$ and see that after $N$ big enough, you have $|a_{n+1}|&lt;|a_n|$...</p>\n" } ]
Question: If $\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0$, show that $\lim_{n\to\infty}{a_n} =0$.
Answer 1: <p>Since $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=0$, for $\epsilon=\frac12$ there is $N\in\mathbb{N}$ such that when $n\ge N$, $$ \bigg|\frac{a_{n+1}}{a_n}\bigg|&lt;\epsilon. $$ Thus for $n&gt;N$, one has $$ \bigg|\frac{a_{N+1}}{a_N}\bigg|&lt;\epsilon, \bigg|\frac{a_{N+2}}{a_{N+1}}\bigg|&lt;\epsilon,\cdots,\bigg|\frac{a_{n}}{a_{n-1}}\bigg|&lt;\epsilon$$ and hence, multiplying these $n-N$ inequalities, $$ \bigg|\frac{a_{n}}{a_N}\bigg|&lt;\epsilon^{n-N}$$ or $$ |a_n|&lt;\epsilon^{n-N}|a_N|. $$ So $$ \lim_{n\to\infty}a_n=0.$$</p>
Answer 2: <p>For $\epsilon =1$, one gets $|a_{n+1}| &lt; |a_n|$ for each $n \ge N$, so as $(|a_n|)$ is eventually decreasing and bounded below, we deduce that it is convergent. Suppose that it converges to some $L &gt;0$ and get a contradiction by proving that $\lim \left|\frac{a_{n+1}}{a_n}\right|$ would then be $1$. Therefore it converges to $0$ and so does $(a_n)$.</p>
Answer 3: <p>That would be an issue, yes. The quickest way to prove this is to recognize that $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=0$ tells you that $\sum_0^\infty a_n$ converges (absolutely) by the ratio test. Thus $a_n$ must go to $0$.</p>
Answer 4: <p>You should study $|a_n|$ and see that for all $n$ past some big enough $N$, you have $|a_{n+1}|&lt;|a_n|$...</p>
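As a numerical illustration of the geometric bound in the first answer — a sketch only, using the sample sequence $a_n = 1/n!$ (not from the original question), whose ratio $a_{n+1}/a_n = 1/(n+1)$ tends to $0$. With $\epsilon = \frac12$ one may take $N = 2$, and the bound $|a_n| < \epsilon^{\,n-N}|a_N|$ can be checked directly:

```python
from math import factorial

eps, N = 0.5, 2                    # ratio 1/(n+1) < 1/2 holds for all n >= N
a = lambda n: 1.0 / factorial(n)   # sample sequence with a_{n+1}/a_n -> 0

for n in (5, 10, 15, 20):
    bound = eps ** (n - N) * a(N)  # geometric bound |a_n| < eps^(n-N) * |a_N|
    print(f"n={n:2d}  a_n={a(n):.3e}  bound={bound:.3e}  holds={a(n) < bound}")
```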
Limit $\lim_{n\to\infty} n^{-3/2}(1+\sqrt{2}+\ldots+\sqrt{n})=\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}}$
2,114,065
5
https://math.stackexchange.com/questions/2114065/limit-lim-n-to-infty-n-3-21-sqrt2-ldots-sqrtn-lim-n-to-infty
<p>How do I find the following limit?</p> <p>$$ \lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}} $$</p> <p>The answer (from <a href="http://www.wolframalpha.com/input/?i=%5Clim_%7Bn%20%5Cto%20%5Cinfty%7D%20%5Cfrac%7B%5Csqrt%7B1%7D%20%2B%20%5Csqrt%7B2%7D%20%2B%20...%20%2B%20%5Csqrt%7Bn%7D%7D%7Bn%5Csqrt%7Bn%7D%7D" rel="nofollow noreferrer">Wolfram</a>) is $\frac{2}{3}$, but I'm not sure how to proceed.</p> <p>Is this an application of the Squeeze theorem? I'm not quite sure.</p>
[ { "answer_id": 2114073, "score": 5, "text": "<p>$$\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^{n}\\sqrt{\\frac{k}{n}} = \\int_{0}^{1}\\sqrt{x}\\,dx = \\color{red}{\\frac{2}{3}}$$\nby <a href=\"https://en.wikipedia.org/wiki/Riemann_sum\" rel=\"nofollow noreferrer\">Riemann sums</a> and the integrability of $\\sqrt{x}$ over $[0,1]$.</p>\n\n<p>For a more elementary approach, notice that $\\sqrt{k}$ is pretty close to $\\frac{2}{3}\\left[\\left(k+\\frac{1}{2}\\right)^{3/2}-\\left(k-\\frac{1}{2}\\right)^{3/2}\\right]$ and apply creative telescoping and squeezing.</p>\n" }, { "answer_id": 2114093, "score": 4, "text": "<p>How about Stolz-Cesaro? If $x_n \\to \\infty $, $y_n \\to \\infty $ and $$ \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then : $$ \\lim_{n\\to\\infty} \\frac{x_n}{y_n} = \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>\n" }, { "answer_id": 2114096, "score": 4, "text": "<p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes \n$$\\frac{\\sqrt{n}}{n\\sqrt{n}-(n-1)\\sqrt{n-1}}=\\frac{\\sqrt{n}(n\\sqrt{n}+(n-1)\\sqrt{n-1})}{n^3+(n-1)^3}=$$ \n$$\\frac{n^2+(n-1)\\sqrt{n^2-n})}{n^2+n(n-1)+(n-1)^2}\\to\\frac{2}{3}$$ </p>\n" }, { "answer_id": 2114075, "score": 1, "text": "<p>Fill in details: $$\\frac1{n\\sqrt n}\\sum_{k=1}^n\\sqrt k=\\frac1{n}\\sum_{k=1}^n\\sqrt\\frac kn\\xrightarrow[n\\to\\infty]{}\\int_0^1\\sqrt x\\,dx$$</p>\n" }, { "answer_id": 2114077, "score": 1, "text": "<p>Using $\\;\\;\\displaystyle\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^nf\\left(\\frac{k}{n}\\right)=\\int_0^1f(x)\\;dx,$ we get\n$$\\lim_{n \\to \\infty} \\frac{\\sqrt{1} + \\sqrt{2} +\\ldots + \\sqrt{n}}{n\\sqrt{n}}=\\lim_{n \\to \\infty}\\frac{1}{n}\\sum_{k=1}^n\\sqrt{\\frac{k}{n}}=\\int_0^1\\sqrt{x}\\;dx=\\frac{2}{3}.$$</p>\n" }, { "answer_id": 2114073, "score": 5, "text": "<p>$$\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^{n}\\sqrt{\\frac{k}{n}} = \\int_{0}^{1}\\sqrt{x}\\,dx = \\color{red}{\\frac{2}{3}}$$\nby <a href=\"https://en.wikipedia.org/wiki/Riemann_sum\" rel=\"nofollow noreferrer\">Riemann sums</a> and the integrability of $\\sqrt{x}$ over $[0,1]$.</p>\n\n<p>For a more elementary approach, notice that $\\sqrt{k}$ is pretty close to $\\frac{2}{3}\\left[\\left(k+\\frac{1}{2}\\right)^{3/2}-\\left(k-\\frac{1}{2}\\right)^{3/2}\\right]$ and apply creative telescoping and squeezing.</p>\n" }, { "answer_id": 2114093, "score": 4, "text": "<p>How about Stolz-Cesaro? If $x_n \\to \\infty $, $y_n \\to \\infty $ and $$ \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then : $$ \\lim_{n\\to\\infty} \\frac{x_n}{y_n} = \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>\n" }, { "answer_id": 2114096, "score": 4, "text": "<p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. 
The limit becomes \n$$\\frac{\\sqrt{n}}{n\\sqrt{n}-(n-1)\\sqrt{n-1}}=\\frac{\\sqrt{n}(n\\sqrt{n}+(n-1)\\sqrt{n-1})}{n^3+(n-1)^3}=$$ \n$$\\frac{n^2+(n-1)\\sqrt{n^2-n})}{n^2+n(n-1)+(n-1)^2}\\to\\frac{2}{3}$$ </p>\n" }, { "answer_id": 2114075, "score": 1, "text": "<p>Fill in details: $$\\frac1{n\\sqrt n}\\sum_{k=1}^n\\sqrt k=\\frac1{n}\\sum_{k=1}^n\\sqrt\\frac kn\\xrightarrow[n\\to\\infty]{}\\int_0^1\\sqrt x\\,dx$$</p>\n" }, { "answer_id": 2114077, "score": 1, "text": "<p>Using $\\;\\;\\displaystyle\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^nf\\left(\\frac{k}{n}\\right)=\\int_0^1f(x)\\;dx,$ we get\n$$\\lim_{n \\to \\infty} \\frac{\\sqrt{1} + \\sqrt{2} +\\ldots + \\sqrt{n}}{n\\sqrt{n}}=\\lim_{n \\to \\infty}\\frac{1}{n}\\sum_{k=1}^n\\sqrt{\\frac{k}{n}}=\\int_0^1\\sqrt{x}\\;dx=\\frac{2}{3}.$$</p>\n" }, { "answer_id": 2114073, "score": 5, "text": "<p>$$\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^{n}\\sqrt{\\frac{k}{n}} = \\int_{0}^{1}\\sqrt{x}\\,dx = \\color{red}{\\frac{2}{3}}$$\nby <a href=\"https://en.wikipedia.org/wiki/Riemann_sum\" rel=\"nofollow noreferrer\">Riemann sums</a> and the integrability of $\\sqrt{x}$ over $[0,1]$.</p>\n\n<p>For a more elementary approach, notice that $\\sqrt{k}$ is pretty close to $\\frac{2}{3}\\left[\\left(k+\\frac{1}{2}\\right)^{3/2}-\\left(k-\\frac{1}{2}\\right)^{3/2}\\right]$ and apply creative telescoping and squeezing.</p>\n" }, { "answer_id": 2114093, "score": 4, "text": "<p>How about Stolz-Cesaro? If $x_n \\to \\infty $, $y_n \\to \\infty $ and $$ \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then : $$ \\lim_{n\\to\\infty} \\frac{x_n}{y_n} = \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>\n" }, { "answer_id": 2114096, "score": 4, "text": "<p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes \n$$\\frac{\\sqrt{n}}{n\\sqrt{n}-(n-1)\\sqrt{n-1}}=\\frac{\\sqrt{n}(n\\sqrt{n}+(n-1)\\sqrt{n-1})}{n^3+(n-1)^3}=$$ \n$$\\frac{n^2+(n-1)\\sqrt{n^2-n})}{n^2+n(n-1)+(n-1)^2}\\to\\frac{2}{3}$$ </p>\n" }, { "answer_id": 2114075, "score": 1, "text": "<p>Fill in details: $$\\frac1{n\\sqrt n}\\sum_{k=1}^n\\sqrt k=\\frac1{n}\\sum_{k=1}^n\\sqrt\\frac kn\\xrightarrow[n\\to\\infty]{}\\int_0^1\\sqrt x\\,dx$$</p>\n" }, { "answer_id": 2114077, "score": 1, "text": "<p>Using $\\;\\;\\displaystyle\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^nf\\left(\\frac{k}{n}\\right)=\\int_0^1f(x)\\;dx,$ we get\n$$\\lim_{n \\to \\infty} \\frac{\\sqrt{1} + \\sqrt{2} +\\ldots + \\sqrt{n}}{n\\sqrt{n}}=\\lim_{n \\to \\infty}\\frac{1}{n}\\sum_{k=1}^n\\sqrt{\\frac{k}{n}}=\\int_0^1\\sqrt{x}\\;dx=\\frac{2}{3}.$$</p>\n" }, { "answer_id": 2114073, "score": 5, "text": "<p>$$\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^{n}\\sqrt{\\frac{k}{n}} = \\int_{0}^{1}\\sqrt{x}\\,dx = \\color{red}{\\frac{2}{3}}$$\nby <a href=\"https://en.wikipedia.org/wiki/Riemann_sum\" rel=\"nofollow noreferrer\">Riemann sums</a> and the integrability of $\\sqrt{x}$ over $[0,1]$.</p>\n\n<p>For a more elementary approach, notice that $\\sqrt{k}$ is pretty close to $\\frac{2}{3}\\left[\\left(k+\\frac{1}{2}\\right)^{3/2}-\\left(k-\\frac{1}{2}\\right)^{3/2}\\right]$ and apply creative telescoping and squeezing.</p>\n" }, { "answer_id": 2114093, "score": 4, "text": "<p>How about Stolz-Cesaro? 
If $x_n \\to \\infty $, $y_n \\to \\infty $ and $$ \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then : $$ \\lim_{n\\to\\infty} \\frac{x_n}{y_n} = \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>\n" }, { "answer_id": 2114096, "score": 4, "text": "<p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes \n$$\\frac{\\sqrt{n}}{n\\sqrt{n}-(n-1)\\sqrt{n-1}}=\\frac{\\sqrt{n}(n\\sqrt{n}+(n-1)\\sqrt{n-1})}{n^3+(n-1)^3}=$$ \n$$\\frac{n^2+(n-1)\\sqrt{n^2-n})}{n^2+n(n-1)+(n-1)^2}\\to\\frac{2}{3}$$ </p>\n" }, { "answer_id": 2114075, "score": 1, "text": "<p>Fill in details: $$\\frac1{n\\sqrt n}\\sum_{k=1}^n\\sqrt k=\\frac1{n}\\sum_{k=1}^n\\sqrt\\frac kn\\xrightarrow[n\\to\\infty]{}\\int_0^1\\sqrt x\\,dx$$</p>\n" }, { "answer_id": 2114077, "score": 1, "text": "<p>Using $\\;\\;\\displaystyle\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^nf\\left(\\frac{k}{n}\\right)=\\int_0^1f(x)\\;dx,$ we get\n$$\\lim_{n \\to \\infty} \\frac{\\sqrt{1} + \\sqrt{2} +\\ldots + \\sqrt{n}}{n\\sqrt{n}}=\\lim_{n \\to \\infty}\\frac{1}{n}\\sum_{k=1}^n\\sqrt{\\frac{k}{n}}=\\int_0^1\\sqrt{x}\\;dx=\\frac{2}{3}.$$</p>\n" }, { "answer_id": 2114073, "score": 5, "text": "<p>$$\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^{n}\\sqrt{\\frac{k}{n}} = \\int_{0}^{1}\\sqrt{x}\\,dx = \\color{red}{\\frac{2}{3}}$$\nby <a href=\"https://en.wikipedia.org/wiki/Riemann_sum\" rel=\"nofollow noreferrer\">Riemann sums</a> and the integrability of $\\sqrt{x}$ over $[0,1]$.</p>\n\n<p>For a more elementary approach, notice that $\\sqrt{k}$ is pretty close to $\\frac{2}{3}\\left[\\left(k+\\frac{1}{2}\\right)^{3/2}-\\left(k-\\frac{1}{2}\\right)^{3/2}\\right]$ and apply creative telescoping and squeezing.</p>\n" }, { "answer_id": 2114093, "score": 4, "text": "<p>How about Stolz-Cesaro? If $x_n \\to \\infty $, $y_n \\to \\infty $ and $$ \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then : $$ \\lim_{n\\to\\infty} \\frac{x_n}{y_n} = \\lim_{n\\to\\infty} \\frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>\n" }, { "answer_id": 2114096, "score": 4, "text": "<p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes \n$$\\frac{\\sqrt{n}}{n\\sqrt{n}-(n-1)\\sqrt{n-1}}=\\frac{\\sqrt{n}(n\\sqrt{n}+(n-1)\\sqrt{n-1})}{n^3+(n-1)^3}=$$ \n$$\\frac{n^2+(n-1)\\sqrt{n^2-n})}{n^2+n(n-1)+(n-1)^2}\\to\\frac{2}{3}$$ </p>\n" }, { "answer_id": 2114075, "score": 1, "text": "<p>Fill in details: $$\\frac1{n\\sqrt n}\\sum_{k=1}^n\\sqrt k=\\frac1{n}\\sum_{k=1}^n\\sqrt\\frac kn\\xrightarrow[n\\to\\infty]{}\\int_0^1\\sqrt x\\,dx$$</p>\n" }, { "answer_id": 2114077, "score": 1, "text": "<p>Using $\\;\\;\\displaystyle\\lim_{n\\to +\\infty}\\frac{1}{n}\\sum_{k=1}^nf\\left(\\frac{k}{n}\\right)=\\int_0^1f(x)\\;dx,$ we get\n$$\\lim_{n \\to \\infty} \\frac{\\sqrt{1} + \\sqrt{2} +\\ldots + \\sqrt{n}}{n\\sqrt{n}}=\\lim_{n \\to \\infty}\\frac{1}{n}\\sum_{k=1}^n\\sqrt{\\frac{k}{n}}=\\int_0^1\\sqrt{x}\\;dx=\\frac{2}{3}.$$</p>\n" } ]
Question: Limit $\lim_{n\to\infty} n^{-3/2}(1+\sqrt{2}+\ldots+\sqrt{n})=\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}}$
Answer 1: <p>$$\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^{n}\sqrt{\frac{k}{n}} = \int_{0}^{1}\sqrt{x}\,dx = \color{red}{\frac{2}{3}}$$ by <a href="https://en.wikipedia.org/wiki/Riemann_sum" rel="nofollow noreferrer">Riemann sums</a> and the integrability of $\sqrt{x}$ over $[0,1]$.</p> <p>For a more elementary approach, notice that $\sqrt{k}$ is pretty close to $\frac{2}{3}\left[\left(k+\frac{1}{2}\right)^{3/2}-\left(k-\frac{1}{2}\right)^{3/2}\right]$ and apply creative telescoping and squeezing.</p>
Answer 2: <p>How about Stolz-Cesaro? If $x_n \to \infty $, $y_n \to \infty $ with $(y_n)$ strictly increasing, and $$ \lim_{n\to\infty} \frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$ exists, then: $$ \lim_{n\to\infty} \frac{x_n}{y_n} = \lim_{n\to\infty} \frac{x_{n+1}-x_n}{y_{n+1}-y_n}$$</p>
Answer 3: <p>All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes $$\frac{\sqrt{n}}{n\sqrt{n}-(n-1)\sqrt{n-1}}=\frac{\sqrt{n}(n\sqrt{n}+(n-1)\sqrt{n-1})}{n^3-(n-1)^3}=$$ $$\frac{n^2+(n-1)\sqrt{n^2-n}}{n^2+n(n-1)+(n-1)^2}\to\frac{2}{3}$$ </p>
Answer 4: <p>Fill in details: $$\frac1{n\sqrt n}\sum_{k=1}^n\sqrt k=\frac1{n}\sum_{k=1}^n\sqrt\frac kn\xrightarrow[n\to\infty]{}\int_0^1\sqrt x\,dx$$</p>
Answer 5: <p>Using $\;\;\displaystyle\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^nf\left(\frac{k}{n}\right)=\int_0^1f(x)\;dx,$ we get $$\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} +\ldots + \sqrt{n}}{n\sqrt{n}}=\lim_{n \to \infty}\frac{1}{n}\sum_{k=1}^n\sqrt{\frac{k}{n}}=\int_0^1\sqrt{x}\;dx=\frac{2}{3}.$$</p>
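As a quick sanity check on the value $2/3$, one can evaluate the ratio numerically; a minimal Python sketch, not part of the original posts (the cutoffs below are arbitrary choices):
<pre><code># sanity check: (sqrt(1) + sqrt(2) + ... + sqrt(n)) / (n*sqrt(n)) -> 2/3
import math

for n in (10, 1_000, 100_000):
    s = sum(math.sqrt(k) for k in range(1, n + 1))
    print(n, s / (n * math.sqrt(n)))  # tends to 0.6666...
</code></pre>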
Problem in showing $\lim_{(x,y)\to (0,0)} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0$ using polar coordinates
2,114,606
5
https://math.stackexchange.com/questions/2114606/problem-in-showing-lim-x-y-to-0-0-frac12x3y54x4y4x64y8-0-u
<p>I'm trying to show that $$\lim_{(x,y)\to (0,0)} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0$$</p> <p>I've used polar coordinates but when I do this I get the possibility of $\frac{0}{0}$ if $\cos(\theta)\to 0$ as $r\to 0$. So it must be that I need some sort of bound to make sense of this limit. But I'm not sure how to proceed.</p>
[ { "answer_id": 2114615, "score": 6, "text": "<p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \\ge 4uv$, so:</p>\n\n<p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \\ge 4x^3 y^4$$</p>\n\n<p><strong>Edit:</strong> </p>\n\n<p>I pointed out in a comment that an argument by symmetry works.</p>\n\n<p>A slightly better approach might be as follows:</p>\n\n<p>$$\\left| \\frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \\right| \\le 4\\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p>\n\n<p>Now we use $x^6 + 4y^8 \\ge 4|x|^3 y^4$ to see that the RHS $\\to 0$.</p>\n" }, { "answer_id": 2115744, "score": 5, "text": "<p>If we wish to use polar coordinates $(r,\\theta)$, then we can write</p>\n\n<p>$$\\begin{align}\n\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\\left(\\frac{12\\cos^3(\\theta)\\sin^5(\\theta)+4\\cos^4(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\\\\\\\\n&amp;=r\\left(3\\sin(\\theta)+\\cos(\\theta)\\right)\\left(\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\tag1\n\\end{align}$$</p>\n\n<p>Let $ g(r,\\theta)=\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}$. Denote $\\sin(\\theta)$ by $s$ and $\\cos(\\theta)$ by $c$. </p>\n\n<p>We will view $g(r,\\theta)$ as a function of $\\theta\\in \\mathbb{R}$, which is differentiable and $2\\pi$-periodic. Therefore the extrema occur at points for which $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$. </p>\n\n<p>Then, taking the partial derivative with respect to $\\theta$, and , we have </p>\n\n<p>$$\\begin{align}\n\\frac{\\partial g(r,\\theta)}{\\partial \\theta}&amp;=4rs^3c^2\\,\\left(\\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\\right)\\\\\\\\\n&amp;= 4rs^3c^2\\,\\left(\\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\\right)\\tag 2\n\\end{align}$$</p>\n\n<p>We see that $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$ when $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$ or $\\cos^6(\\theta)=4r^2\\sin^8(\\theta)$. </p>\n\n<p>When $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$, $g(r,\\theta)=0$. 
When $\\cos(\\theta)^6=4r^2\\sin^8(\\theta)$, </p>\n\n<p>$$g(r,\\theta)=\\text{sgn}(\\cos(\\theta))\\tag 3$$</p>\n\n<p>Finally, using $(3)$ in $(1)$ reveals</p>\n\n<p>$$\\left|\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\\right|\\le r|3\\sin(\\theta)+\\cos(\\theta)|$$</p>\n\n<p>whereupon applying the squeeze theorem yields the coveted limit</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\lim_{(x,y)\\to (0,0)}\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p>\n</blockquote>\n" }, { "answer_id": 2114615, "score": 6, "text": "<p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \\ge 4uv$, so:</p>\n\n<p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \\ge 4x^3 y^4$$</p>\n\n<p><strong>Edit:</strong> </p>\n\n<p>I pointed out in a comment that an argument by symmetry works.</p>\n\n<p>A slightly better approach might be as follows:</p>\n\n<p>$$\\left| \\frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \\right| \\le 4\\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p>\n\n<p>Now we use $x^6 + 4y^8 \\ge 4|x|^3 y^4$ to see that the RHS $\\to 0$.</p>\n" }, { "answer_id": 2115744, "score": 5, "text": "<p>If we wish to use polar coordinates $(r,\\theta)$, then we can write</p>\n\n<p>$$\\begin{align}\n\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\\left(\\frac{12\\cos^3(\\theta)\\sin^5(\\theta)+4\\cos^4(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\\\\\\\\n&amp;=r\\left(3\\sin(\\theta)+\\cos(\\theta)\\right)\\left(\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\tag1\n\\end{align}$$</p>\n\n<p>Let $ g(r,\\theta)=\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}$. Denote $\\sin(\\theta)$ by $s$ and $\\cos(\\theta)$ by $c$. </p>\n\n<p>We will view $g(r,\\theta)$ as a function of $\\theta\\in \\mathbb{R}$, which is differentiable and $2\\pi$-periodic. Therefore the extrema occur at points for which $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$. </p>\n\n<p>Then, taking the partial derivative with respect to $\\theta$, and , we have </p>\n\n<p>$$\\begin{align}\n\\frac{\\partial g(r,\\theta)}{\\partial \\theta}&amp;=4rs^3c^2\\,\\left(\\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\\right)\\\\\\\\\n&amp;= 4rs^3c^2\\,\\left(\\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\\right)\\tag 2\n\\end{align}$$</p>\n\n<p>We see that $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$ when $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$ or $\\cos^6(\\theta)=4r^2\\sin^8(\\theta)$. </p>\n\n<p>When $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$, $g(r,\\theta)=0$. 
When $\\cos(\\theta)^6=4r^2\\sin^8(\\theta)$, </p>\n\n<p>$$g(r,\\theta)=\\text{sgn}(\\cos(\\theta))\\tag 3$$</p>\n\n<p>Finally, using $(3)$ in $(1)$ reveals</p>\n\n<p>$$\\left|\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\\right|\\le r|3\\sin(\\theta)+\\cos(\\theta)|$$</p>\n\n<p>whereupon applying the squeeze theorem yields the coveted limit</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\lim_{(x,y)\\to (0,0)}\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p>\n</blockquote>\n" }, { "answer_id": 2114615, "score": 6, "text": "<p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \\ge 4uv$, so:</p>\n\n<p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \\ge 4x^3 y^4$$</p>\n\n<p><strong>Edit:</strong> </p>\n\n<p>I pointed out in a comment that an argument by symmetry works.</p>\n\n<p>A slightly better approach might be as follows:</p>\n\n<p>$$\\left| \\frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \\right| \\le 4\\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p>\n\n<p>Now we use $x^6 + 4y^8 \\ge 4|x|^3 y^4$ to see that the RHS $\\to 0$.</p>\n" }, { "answer_id": 2115744, "score": 5, "text": "<p>If we wish to use polar coordinates $(r,\\theta)$, then we can write</p>\n\n<p>$$\\begin{align}\n\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\\left(\\frac{12\\cos^3(\\theta)\\sin^5(\\theta)+4\\cos^4(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\\\\\\\\n&amp;=r\\left(3\\sin(\\theta)+\\cos(\\theta)\\right)\\left(\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\tag1\n\\end{align}$$</p>\n\n<p>Let $ g(r,\\theta)=\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}$. Denote $\\sin(\\theta)$ by $s$ and $\\cos(\\theta)$ by $c$. </p>\n\n<p>We will view $g(r,\\theta)$ as a function of $\\theta\\in \\mathbb{R}$, which is differentiable and $2\\pi$-periodic. Therefore the extrema occur at points for which $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$. </p>\n\n<p>Then, taking the partial derivative with respect to $\\theta$, and , we have </p>\n\n<p>$$\\begin{align}\n\\frac{\\partial g(r,\\theta)}{\\partial \\theta}&amp;=4rs^3c^2\\,\\left(\\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\\right)\\\\\\\\\n&amp;= 4rs^3c^2\\,\\left(\\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\\right)\\tag 2\n\\end{align}$$</p>\n\n<p>We see that $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$ when $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$ or $\\cos^6(\\theta)=4r^2\\sin^8(\\theta)$. </p>\n\n<p>When $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$, $g(r,\\theta)=0$. 
When $\\cos(\\theta)^6=4r^2\\sin^8(\\theta)$, </p>\n\n<p>$$g(r,\\theta)=\\text{sgn}(\\cos(\\theta))\\tag 3$$</p>\n\n<p>Finally, using $(3)$ in $(1)$ reveals</p>\n\n<p>$$\\left|\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\\right|\\le r|3\\sin(\\theta)+\\cos(\\theta)|$$</p>\n\n<p>whereupon applying the squeeze theorem yields the coveted limit</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\lim_{(x,y)\\to (0,0)}\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p>\n</blockquote>\n" }, { "answer_id": 2114615, "score": 6, "text": "<p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \\ge 4uv$, so:</p>\n\n<p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \\ge 4x^3 y^4$$</p>\n\n<p><strong>Edit:</strong> </p>\n\n<p>I pointed out in a comment that an argument by symmetry works.</p>\n\n<p>A slightly better approach might be as follows:</p>\n\n<p>$$\\left| \\frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \\right| \\le 4\\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p>\n\n<p>Now we use $x^6 + 4y^8 \\ge 4|x|^3 y^4$ to see that the RHS $\\to 0$.</p>\n" }, { "answer_id": 2115744, "score": 5, "text": "<p>If we wish to use polar coordinates $(r,\\theta)$, then we can write</p>\n\n<p>$$\\begin{align}\n\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\\left(\\frac{12\\cos^3(\\theta)\\sin^5(\\theta)+4\\cos^4(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\\\\\\\\n&amp;=r\\left(3\\sin(\\theta)+\\cos(\\theta)\\right)\\left(\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\tag1\n\\end{align}$$</p>\n\n<p>Let $ g(r,\\theta)=\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}$. Denote $\\sin(\\theta)$ by $s$ and $\\cos(\\theta)$ by $c$. </p>\n\n<p>We will view $g(r,\\theta)$ as a function of $\\theta\\in \\mathbb{R}$, which is differentiable and $2\\pi$-periodic. Therefore the extrema occur at points for which $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$. </p>\n\n<p>Then, taking the partial derivative with respect to $\\theta$, and , we have </p>\n\n<p>$$\\begin{align}\n\\frac{\\partial g(r,\\theta)}{\\partial \\theta}&amp;=4rs^3c^2\\,\\left(\\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\\right)\\\\\\\\\n&amp;= 4rs^3c^2\\,\\left(\\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\\right)\\tag 2\n\\end{align}$$</p>\n\n<p>We see that $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$ when $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$ or $\\cos^6(\\theta)=4r^2\\sin^8(\\theta)$. </p>\n\n<p>When $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$, $g(r,\\theta)=0$. 
When $\\cos(\\theta)^6=4r^2\\sin^8(\\theta)$, </p>\n\n<p>$$g(r,\\theta)=\\text{sgn}(\\cos(\\theta))\\tag 3$$</p>\n\n<p>Finally, using $(3)$ in $(1)$ reveals</p>\n\n<p>$$\\left|\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\\right|\\le r|3\\sin(\\theta)+\\cos(\\theta)|$$</p>\n\n<p>whereupon applying the squeeze theorem yields the coveted limit</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\lim_{(x,y)\\to (0,0)}\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p>\n</blockquote>\n" }, { "answer_id": 2114615, "score": 6, "text": "<p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \\ge 4uv$, so:</p>\n\n<p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \\ge 4x^3 y^4$$</p>\n\n<p><strong>Edit:</strong> </p>\n\n<p>I pointed out in a comment that an argument by symmetry works.</p>\n\n<p>A slightly better approach might be as follows:</p>\n\n<p>$$\\left| \\frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \\right| \\le 4\\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p>\n\n<p>Now we use $x^6 + 4y^8 \\ge 4|x|^3 y^4$ to see that the RHS $\\to 0$.</p>\n" }, { "answer_id": 2115744, "score": 5, "text": "<p>If we wish to use polar coordinates $(r,\\theta)$, then we can write</p>\n\n<p>$$\\begin{align}\n\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\\left(\\frac{12\\cos^3(\\theta)\\sin^5(\\theta)+4\\cos^4(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\\\\\\\\n&amp;=r\\left(3\\sin(\\theta)+\\cos(\\theta)\\right)\\left(\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}\\right)\\tag1\n\\end{align}$$</p>\n\n<p>Let $ g(r,\\theta)=\\frac{4r\\cos^3(\\theta)\\sin^4(\\theta)}{\\cos^6(\\theta)+4r^2\\sin^8(\\theta)}$. Denote $\\sin(\\theta)$ by $s$ and $\\cos(\\theta)$ by $c$. </p>\n\n<p>We will view $g(r,\\theta)$ as a function of $\\theta\\in \\mathbb{R}$, which is differentiable and $2\\pi$-periodic. Therefore the extrema occur at points for which $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$. </p>\n\n<p>Then, taking the partial derivative with respect to $\\theta$, and , we have </p>\n\n<p>$$\\begin{align}\n\\frac{\\partial g(r,\\theta)}{\\partial \\theta}&amp;=4rs^3c^2\\,\\left(\\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\\right)\\\\\\\\\n&amp;= 4rs^3c^2\\,\\left(\\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\\right)\\tag 2\n\\end{align}$$</p>\n\n<p>We see that $\\frac{\\partial g(r,\\theta)}{\\partial \\theta}=0$ when $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$ or $\\cos^6(\\theta)=4r^2\\sin^8(\\theta)$. </p>\n\n<p>When $\\sin(\\theta)=0$ or $\\cos(\\theta)=0$, $g(r,\\theta)=0$. When $\\cos(\\theta)^6=4r^2\\sin^8(\\theta)$, </p>\n\n<p>$$g(r,\\theta)=\\text{sgn}(\\cos(\\theta))\\tag 3$$</p>\n\n<p>Finally, using $(3)$ in $(1)$ reveals</p>\n\n<p>$$\\left|\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\\right|\\le r|3\\sin(\\theta)+\\cos(\\theta)|$$</p>\n\n<p>whereupon applying the squeeze theorem yields the coveted limit</p>\n\n<blockquote>\n <p>$$\\bbox[5px,border:2px solid #C0A000]{\\lim_{(x,y)\\to (0,0)}\\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p>\n</blockquote>\n" } ]
Question: Problem in showing $\lim_{(x,y)\to (0,0)} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0$ using polar coordinates
Answer 1: <p>Hint: $u^2 + 4v^2 = (u-2v)^2 + 4uv \ge 4uv$, so:</p> <p>$$x^6 + 4y^8 = (x^3)^2 + 4(y^4)^2 \ge 4x^3 y^4$$</p> <p><strong>Edit:</strong> </p> <p>I pointed out in a comment that an argument by symmetry works.</p> <p>A slightly better approach might be as follows:</p> <p>$$\left| \frac{12x^3 y^5 + 4 x^4y^4}{x^6 + 4y^8} \right| \le 4\frac{3|x|^3 |y|^5 + x^4y^4}{x^6 + 4y^8}$$</p> <p>Now we use $x^6 + 4y^8 \ge 4|x|^3 y^4$ to see that the RHS $\to 0$.</p>
Answer 2: <p>If we wish to use polar coordinates $(r,\theta)$, then we can write</p> <p>$$\begin{align} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&amp;=r^2\left(\frac{12\cos^3(\theta)\sin^5(\theta)+4\cos^4(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}\right)\\\\ &amp;=r\left(3\sin(\theta)+\cos(\theta)\right)\left(\frac{4r\cos^3(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}\right)\tag1 \end{align}$$</p> <p>Let $ g(r,\theta)=\frac{4r\cos^3(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}$. Denote $\sin(\theta)$ by $s$ and $\cos(\theta)$ by $c$. </p> <p>We will view $g(r,\theta)$ as a function of $\theta\in \mathbb{R}$, which is differentiable and $2\pi$-periodic. Therefore the extrema occur at points for which $\frac{\partial g(r,\theta)}{\partial \theta}=0$. </p> <p>Then, taking the partial derivative with respect to $\theta$ and simplifying, we have </p> <p>$$\begin{align} \frac{\partial g(r,\theta)}{\partial \theta}&amp;=4rs^3c^2\,\left(\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\right)\\\\ &amp;= 4rs^3c^2\,\left(\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\right)\tag 2 \end{align}$$</p> <p>We see that $\frac{\partial g(r,\theta)}{\partial \theta}=0$ when $\sin(\theta)=0$ or $\cos(\theta)=0$ or $\cos^6(\theta)=4r^2\sin^8(\theta)$. </p> <p>When $\sin(\theta)=0$ or $\cos(\theta)=0$, $g(r,\theta)=0$. When $\cos^6(\theta)=4r^2\sin^8(\theta)$, </p> <p>$$g(r,\theta)=\text{sgn}(\cos(\theta))\tag 3$$</p> <p>Finally, using $(3)$ in $(1)$ reveals</p> <p>$$\left|\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\right|\le r|3\sin(\theta)+\cos(\theta)|$$</p> <p>whereupon applying the squeeze theorem yields the coveted limit</p> <blockquote> <p>$$\bbox[5px,border:2px solid #C0A000]{\lim_{(x,y)\to (0,0)}\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$</p> </blockquote>
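As a numerical cross-check of the squeeze bound above, one can sample $|f|$ on shrinking circles around the origin; a minimal Python sketch, not part of the original answers (the radii and the 1000-point grid are arbitrary choices):
<pre><code># sample |f(x, y)| on circles of radius r; the maximum shrinks with r
import math

def f(x, y):
    return (12*x**3*y**5 + 4*x**4*y**4) / (x**6 + 4*y**8)

for r in (1e-1, 1e-2, 1e-3):
    pts = ((r*math.cos(2*math.pi*k/1000), r*math.sin(2*math.pi*k/1000))
           for k in range(1000))
    print(r, max(abs(f(x, y)) for x, y in pts))  # bounded by about r*sqrt(10)
</code></pre>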
How can I show that $v_i\otimes v_j$ is non zero in $V \otimes_\mathbb {F}V$
2,114,636
5
https://math.stackexchange.com/questions/2114636/how-can-i-show-that-v-i-otimes-v-j-is-non-zero-in-v-otimes-mathbb-fv
<p>Let $V$ be a finite dimensional vector space over a field $\mathbb F.$ Let $\rm dim (V)=n&gt;0$ and $\mathcal {B}=\{v_1,\ldots,v_n\}$ be a basis of $V.$ Now we know the dimension of $V \otimes_\mathbb {F}V$ is $n^2$ as $V \otimes_\mathbb {F}V \cong \mathbb {F^{n^2}}.$ Now since the set $\mathcal {A}=\{v_i\otimes v_j:1 \leq i,j\leq n\}$ spans $V \otimes_\mathbb {F}V$ and the number of elements in $\mathcal {A}$ is $n^2$, $\mathcal {A}$ forms a basis for $V \otimes_\mathbb {F}V$ as a vector space over $\mathbb F.$ In this way every element in $\mathcal {A}$ is non-zero.</p> <blockquote> <p>Now my question is: if I don't want to use the above arguments, how can I show that for any $i$ and $j$ the element $v_i\otimes v_j$ is non-zero in $V \otimes_\mathbb {F}V$? If it can be shown from the construction of the tensor product, that would help me.</p> </blockquote> <p>Help me. Many Thanks.</p>
[ { "answer_id": 2114642, "score": 4, "text": "<p>Whatever your definition of tensor is, it should be true that any bilinear function\n$$\nV\\times V\\longrightarrow \\mathbb{F}\n$$\nmust factor uniquely through\n$$\nV\\times V\\longrightarrow V\\otimes_\\mathbb{F} V\\longrightarrow \\mathbb{F}\n$$\nNow take any linear functional $\\ell :V\\to\\mathbb{F}$ such that $\\ell(v_i)\\neq0 $ and $\\ell(v_j)\\neq 0$. Define a bilinear function $f:V\\times V\\to \\mathbb{F}$ as\n$$\nf(u,v) = \\ell(u)\\ell(v)\n$$\nSince $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\\otimes_\\mathbb{F} V\\to \\mathbb{F}$, i.e. $g(u\\otimes v) = f(u,v)$. \nSince $g(v_i\\otimes v_j) = f(v_i,v_j) = \\ell(v_i)\\ell(v_j)\\neq 0$, $v_i\\otimes v_j$ is nonzero.</p>\n" }, { "answer_id": 2114643, "score": 2, "text": "<p>If $(v_i,v_j)$ are non-zero there is a bilinear application $\\beta : V \\times V \\to k$ with $\\beta(v_i,v_j) = 1$ and by the universal property of tensor product we get a map $b : V \\otimes V \\to k $ with $b(v_i \\otimes v_j) = 1$ so it cannot be zero. </p>\n" }, { "answer_id": 2114642, "score": 4, "text": "<p>Whatever your definition of tensor is, it should be true that any bilinear function\n$$\nV\\times V\\longrightarrow \\mathbb{F}\n$$\nmust factor uniquely through\n$$\nV\\times V\\longrightarrow V\\otimes_\\mathbb{F} V\\longrightarrow \\mathbb{F}\n$$\nNow take any linear functional $\\ell :V\\to\\mathbb{F}$ such that $\\ell(v_i)\\neq0 $ and $\\ell(v_j)\\neq 0$. Define a bilinear function $f:V\\times V\\to \\mathbb{F}$ as\n$$\nf(u,v) = \\ell(u)\\ell(v)\n$$\nSince $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\\otimes_\\mathbb{F} V\\to \\mathbb{F}$, i.e. $g(u\\otimes v) = f(u,v)$. \nSince $g(v_i\\otimes v_j) = f(v_i,v_j) = \\ell(v_i)\\ell(v_j)\\neq 0$, $v_i\\otimes v_j$ is nonzero.</p>\n" }, { "answer_id": 2114643, "score": 2, "text": "<p>If $(v_i,v_j)$ are non-zero there is a bilinear application $\\beta : V \\times V \\to k$ with $\\beta(v_i,v_j) = 1$ and by the universal property of tensor product we get a map $b : V \\otimes V \\to k $ with $b(v_i \\otimes v_j) = 1$ so it cannot be zero. </p>\n" }, { "answer_id": 2114642, "score": 4, "text": "<p>Whatever your definition of tensor is, it should be true that any bilinear function\n$$\nV\\times V\\longrightarrow \\mathbb{F}\n$$\nmust factor uniquely through\n$$\nV\\times V\\longrightarrow V\\otimes_\\mathbb{F} V\\longrightarrow \\mathbb{F}\n$$\nNow take any linear functional $\\ell :V\\to\\mathbb{F}$ such that $\\ell(v_i)\\neq0 $ and $\\ell(v_j)\\neq 0$. Define a bilinear function $f:V\\times V\\to \\mathbb{F}$ as\n$$\nf(u,v) = \\ell(u)\\ell(v)\n$$\nSince $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\\otimes_\\mathbb{F} V\\to \\mathbb{F}$, i.e. $g(u\\otimes v) = f(u,v)$. \nSince $g(v_i\\otimes v_j) = f(v_i,v_j) = \\ell(v_i)\\ell(v_j)\\neq 0$, $v_i\\otimes v_j$ is nonzero.</p>\n" }, { "answer_id": 2114643, "score": 2, "text": "<p>If $(v_i,v_j)$ are non-zero there is a bilinear application $\\beta : V \\times V \\to k$ with $\\beta(v_i,v_j) = 1$ and by the universal property of tensor product we get a map $b : V \\otimes V \\to k $ with $b(v_i \\otimes v_j) = 1$ so it cannot be zero. 
</p>\n" }, { "answer_id": 2114642, "score": 4, "text": "<p>Whatever your definition of tensor is, it should be true that any bilinear function\n$$\nV\\times V\\longrightarrow \\mathbb{F}\n$$\nmust factor uniquely through\n$$\nV\\times V\\longrightarrow V\\otimes_\\mathbb{F} V\\longrightarrow \\mathbb{F}\n$$\nNow take any linear functional $\\ell :V\\to\\mathbb{F}$ such that $\\ell(v_i)\\neq0 $ and $\\ell(v_j)\\neq 0$. Define a bilinear function $f:V\\times V\\to \\mathbb{F}$ as\n$$\nf(u,v) = \\ell(u)\\ell(v)\n$$\nSince $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\\otimes_\\mathbb{F} V\\to \\mathbb{F}$, i.e. $g(u\\otimes v) = f(u,v)$. \nSince $g(v_i\\otimes v_j) = f(v_i,v_j) = \\ell(v_i)\\ell(v_j)\\neq 0$, $v_i\\otimes v_j$ is nonzero.</p>\n" }, { "answer_id": 2114643, "score": 2, "text": "<p>If $(v_i,v_j)$ are non-zero there is a bilinear application $\\beta : V \\times V \\to k$ with $\\beta(v_i,v_j) = 1$ and by the universal property of tensor product we get a map $b : V \\otimes V \\to k $ with $b(v_i \\otimes v_j) = 1$ so it cannot be zero. </p>\n" }, { "answer_id": 2114642, "score": 4, "text": "<p>Whatever your definition of tensor is, it should be true that any bilinear function\n$$\nV\\times V\\longrightarrow \\mathbb{F}\n$$\nmust factor uniquely through\n$$\nV\\times V\\longrightarrow V\\otimes_\\mathbb{F} V\\longrightarrow \\mathbb{F}\n$$\nNow take any linear functional $\\ell :V\\to\\mathbb{F}$ such that $\\ell(v_i)\\neq0 $ and $\\ell(v_j)\\neq 0$. Define a bilinear function $f:V\\times V\\to \\mathbb{F}$ as\n$$\nf(u,v) = \\ell(u)\\ell(v)\n$$\nSince $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\\otimes_\\mathbb{F} V\\to \\mathbb{F}$, i.e. $g(u\\otimes v) = f(u,v)$. \nSince $g(v_i\\otimes v_j) = f(v_i,v_j) = \\ell(v_i)\\ell(v_j)\\neq 0$, $v_i\\otimes v_j$ is nonzero.</p>\n" }, { "answer_id": 2114643, "score": 2, "text": "<p>If $(v_i,v_j)$ are non-zero there is a bilinear application $\\beta : V \\times V \\to k$ with $\\beta(v_i,v_j) = 1$ and by the universal property of tensor product we get a map $b : V \\otimes V \\to k $ with $b(v_i \\otimes v_j) = 1$ so it cannot be zero. </p>\n" } ]
Question: How can I show that $v_i\otimes v_j$ is non zero in $V \otimes_\mathbb {F}V$
Answer 1: <p>Whatever your definition of tensor is, it should be true that any bilinear function $$ V\times V\longrightarrow \mathbb{F} $$ must factor uniquely through $$ V\times V\longrightarrow V\otimes_\mathbb{F} V\longrightarrow \mathbb{F} $$ Now take any linear functional $\ell :V\to\mathbb{F}$ such that $\ell(v_i)\neq0 $ and $\ell(v_j)\neq 0$. Define a bilinear function $f:V\times V\to \mathbb{F}$ as $$ f(u,v) = \ell(u)\ell(v) $$ Since $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\otimes_\mathbb{F} V\to \mathbb{F}$, i.e. $g(u\otimes v) = f(u,v)$. Since $g(v_i\otimes v_j) = f(v_i,v_j) = \ell(v_i)\ell(v_j)\neq 0$, $v_i\otimes v_j$ is nonzero.</p>
Answer 2: <p>If $v_i$ and $v_j$ are non-zero there is a bilinear map $\beta : V \times V \to k$ with $\beta(v_i,v_j) = 1$ and by the universal property of the tensor product we get a map $b : V \otimes V \to k $ with $b(v_i \otimes v_j) = 1$, so it cannot be zero.</p>
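In coordinates the bilinear-functional argument becomes very concrete: modelling $V\otimes_\mathbb{F} V$ as $\mathbb{F}^{n^2}$ via the Kronecker product sends $v_i\otimes v_j$ to a vector with a single non-zero entry. A throwaway Python illustration of this coordinate picture, not part of the answers above:
<pre><code># e_i tensor e_j corresponds to kron(e_i, e_j), which has exactly one 1
import numpy as np

n = 4
basis = np.eye(n)
for i in range(n):
    for j in range(n):
        assert np.count_nonzero(np.kron(basis[i], basis[j])) == 1
print("every basis tensor e_i tensor e_j is non-zero")
</code></pre>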
Find $\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $ without L&#39;hopital&#39;s rule
2,114,719
5
https://math.stackexchange.com/questions/2114719/find-lim-x-to-0-frac-ln-x21-x2-without-lhopitals-rule
<p>I have to find the limit <strong>without L'hopital's rule</strong>: $$\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $$</p> <p>Is it possible? I thought about using the squeeze theorem or something similar, but it didn't work out. </p> <p>Hints are more than welcome! </p> <p>P.S.: I haven't studied Taylor series or integrals yet. </p>
[ { "answer_id": 2114725, "score": 14, "text": "<p>$$\\begin{align}\n\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}&amp;=\\lim_{x \\to 0} \\ln (x^2+1)^{\\frac{1}{x^2}}\\\\\n&amp;=\\ln\\left(\\lim_{x \\to 0} (x^2+1)^{\\frac{1}{x^2}}\\right)\\\\\n&amp;=\\ln e=1\n \\end{align}$$ </p>\n" }, { "answer_id": 2114750, "score": 8, "text": "<p>Let $f(u)=e^u$. With $t=\\ln(x^2+1)$ we get</p>\n\n<p>$$\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}=\\lim_{t \\to 0} \\frac{t-0} {f(t)-f(0)}=\\frac{1}{f'(0)}=1.$$</p>\n" }, { "answer_id": 2114724, "score": 4, "text": "<p>If you know about equivalents, you have</p>\n\n<p>$$\\ln(x+1)\\underset{x\\to 0}{\\sim}x$$</p>\n\n<p>so</p>\n\n<p>$$\\ln(x^2+1)\\underset{x\\to 0}{\\sim}x^2.$$</p>\n\n<p>Therefore, </p>\n\n<p>$$\\lim_{x\\to 0} \\frac{\\ln(x^2+1)}{x^2}=\\lim_{x\\to0} \\frac{x^2}{x^2}=1.$$</p>\n" }, { "answer_id": 2114828, "score": 3, "text": "<p>With series:</p>\n\n<p>$ \\ln(x^2+1)=x^2-\\frac{x^4}{2}+\\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p>\n\n<p>hence </p>\n\n<p>$\\frac{\\ln (x^2+1)} {x^2}=1-\\frac{x^2}{2}+\\frac{x^4}{3}-+... \\to 1$ for $x \\to 0$</p>\n" }, { "answer_id": 2114725, "score": 14, "text": "<p>$$\\begin{align}\n\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}&amp;=\\lim_{x \\to 0} \\ln (x^2+1)^{\\frac{1}{x^2}}\\\\\n&amp;=\\ln\\left(\\lim_{x \\to 0} (x^2+1)^{\\frac{1}{x^2}}\\right)\\\\\n&amp;=\\ln e=1\n \\end{align}$$ </p>\n" }, { "answer_id": 2114750, "score": 8, "text": "<p>Let $f(u)=e^u$. With $t=\\ln(x^2+1)$ we get</p>\n\n<p>$$\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}=\\lim_{t \\to 0} \\frac{t-0} {f(t)-f(0)}=\\frac{1}{f'(0)}=1.$$</p>\n" }, { "answer_id": 2114724, "score": 4, "text": "<p>If you know about equivalents, you have</p>\n\n<p>$$\\ln(x+1)\\underset{x\\to 0}{\\sim}x$$</p>\n\n<p>so</p>\n\n<p>$$\\ln(x^2+1)\\underset{x\\to 0}{\\sim}x^2.$$</p>\n\n<p>Therefore, </p>\n\n<p>$$\\lim_{x\\to 0} \\frac{\\ln(x^2+1)}{x^2}=\\lim_{x\\to0} \\frac{x^2}{x^2}=1.$$</p>\n" }, { "answer_id": 2114828, "score": 3, "text": "<p>With series:</p>\n\n<p>$ \\ln(x^2+1)=x^2-\\frac{x^4}{2}+\\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p>\n\n<p>hence </p>\n\n<p>$\\frac{\\ln (x^2+1)} {x^2}=1-\\frac{x^2}{2}+\\frac{x^4}{3}-+... \\to 1$ for $x \\to 0$</p>\n" }, { "answer_id": 2114725, "score": 14, "text": "<p>$$\\begin{align}\n\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}&amp;=\\lim_{x \\to 0} \\ln (x^2+1)^{\\frac{1}{x^2}}\\\\\n&amp;=\\ln\\left(\\lim_{x \\to 0} (x^2+1)^{\\frac{1}{x^2}}\\right)\\\\\n&amp;=\\ln e=1\n \\end{align}$$ </p>\n" }, { "answer_id": 2114750, "score": 8, "text": "<p>Let $f(u)=e^u$. With $t=\\ln(x^2+1)$ we get</p>\n\n<p>$$\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}=\\lim_{t \\to 0} \\frac{t-0} {f(t)-f(0)}=\\frac{1}{f'(0)}=1.$$</p>\n" }, { "answer_id": 2114724, "score": 4, "text": "<p>If you know about equivalents, you have</p>\n\n<p>$$\\ln(x+1)\\underset{x\\to 0}{\\sim}x$$</p>\n\n<p>so</p>\n\n<p>$$\\ln(x^2+1)\\underset{x\\to 0}{\\sim}x^2.$$</p>\n\n<p>Therefore, </p>\n\n<p>$$\\lim_{x\\to 0} \\frac{\\ln(x^2+1)}{x^2}=\\lim_{x\\to0} \\frac{x^2}{x^2}=1.$$</p>\n" }, { "answer_id": 2114828, "score": 3, "text": "<p>With series:</p>\n\n<p>$ \\ln(x^2+1)=x^2-\\frac{x^4}{2}+\\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p>\n\n<p>hence </p>\n\n<p>$\\frac{\\ln (x^2+1)} {x^2}=1-\\frac{x^2}{2}+\\frac{x^4}{3}-+... 
\\to 1$ for $x \\to 0$</p>\n" }, { "answer_id": 2114725, "score": 14, "text": "<p>$$\\begin{align}\n\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}&amp;=\\lim_{x \\to 0} \\ln (x^2+1)^{\\frac{1}{x^2}}\\\\\n&amp;=\\ln\\left(\\lim_{x \\to 0} (x^2+1)^{\\frac{1}{x^2}}\\right)\\\\\n&amp;=\\ln e=1\n \\end{align}$$ </p>\n" }, { "answer_id": 2114750, "score": 8, "text": "<p>Let $f(u)=e^u$. With $t=\\ln(x^2+1)$ we get</p>\n\n<p>$$\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}=\\lim_{t \\to 0} \\frac{t-0} {f(t)-f(0)}=\\frac{1}{f'(0)}=1.$$</p>\n" }, { "answer_id": 2114724, "score": 4, "text": "<p>If you know about equivalents, you have</p>\n\n<p>$$\\ln(x+1)\\underset{x\\to 0}{\\sim}x$$</p>\n\n<p>so</p>\n\n<p>$$\\ln(x^2+1)\\underset{x\\to 0}{\\sim}x^2.$$</p>\n\n<p>Therefore, </p>\n\n<p>$$\\lim_{x\\to 0} \\frac{\\ln(x^2+1)}{x^2}=\\lim_{x\\to0} \\frac{x^2}{x^2}=1.$$</p>\n" }, { "answer_id": 2114828, "score": 3, "text": "<p>With series:</p>\n\n<p>$ \\ln(x^2+1)=x^2-\\frac{x^4}{2}+\\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p>\n\n<p>hence </p>\n\n<p>$\\frac{\\ln (x^2+1)} {x^2}=1-\\frac{x^2}{2}+\\frac{x^4}{3}-+... \\to 1$ for $x \\to 0$</p>\n" }, { "answer_id": 2114725, "score": 14, "text": "<p>$$\\begin{align}\n\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}&amp;=\\lim_{x \\to 0} \\ln (x^2+1)^{\\frac{1}{x^2}}\\\\\n&amp;=\\ln\\left(\\lim_{x \\to 0} (x^2+1)^{\\frac{1}{x^2}}\\right)\\\\\n&amp;=\\ln e=1\n \\end{align}$$ </p>\n" }, { "answer_id": 2114750, "score": 8, "text": "<p>Let $f(u)=e^u$. With $t=\\ln(x^2+1)$ we get</p>\n\n<p>$$\\lim_{x \\to 0} \\frac{\\ln (x^2+1)} {x^2}=\\lim_{t \\to 0} \\frac{t-0} {f(t)-f(0)}=\\frac{1}{f'(0)}=1.$$</p>\n" }, { "answer_id": 2114724, "score": 4, "text": "<p>If you know about equivalents, you have</p>\n\n<p>$$\\ln(x+1)\\underset{x\\to 0}{\\sim}x$$</p>\n\n<p>so</p>\n\n<p>$$\\ln(x^2+1)\\underset{x\\to 0}{\\sim}x^2.$$</p>\n\n<p>Therefore, </p>\n\n<p>$$\\lim_{x\\to 0} \\frac{\\ln(x^2+1)}{x^2}=\\lim_{x\\to0} \\frac{x^2}{x^2}=1.$$</p>\n" }, { "answer_id": 2114828, "score": 3, "text": "<p>With series:</p>\n\n<p>$ \\ln(x^2+1)=x^2-\\frac{x^4}{2}+\\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p>\n\n<p>hence </p>\n\n<p>$\\frac{\\ln (x^2+1)} {x^2}=1-\\frac{x^2}{2}+\\frac{x^4}{3}-+... \\to 1$ for $x \\to 0$</p>\n" } ]
Question: Find $\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $ without L&#39;hopital&#39;s rule
Answer 1: <p>$$\begin{align} \lim_{x \to 0} \frac{\ln (x^2+1)} {x^2}&amp;=\lim_{x \to 0} \ln (x^2+1)^{\frac{1}{x^2}}\\ &amp;=\ln\left(\lim_{x \to 0} (x^2+1)^{\frac{1}{x^2}}\right)\\ &amp;=\ln e=1 \end{align}$$ </p> <p>The second step uses the continuity of $\ln$, and the last uses $\lim_{t \to 0} (1+t)^{1/t}=e$ with $t=x^2$.</p>
Answer 2: <p>Let $f(u)=e^u$. With $t=\ln(x^2+1)$ we get</p> <p>$$\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2}=\lim_{t \to 0} \frac{t-0} {f(t)-f(0)}=\frac{1}{f'(0)}=1.$$</p>
Answer 3: <p>If you know about equivalents, you have</p> <p>$$\ln(x+1)\underset{x\to 0}{\sim}x$$</p> <p>so</p> <p>$$\ln(x^2+1)\underset{x\to 0}{\sim}x^2.$$</p> <p>Therefore, </p> <p>$$\lim_{x\to 0} \frac{\ln(x^2+1)}{x^2}=\lim_{x\to0} \frac{x^2}{x^2}=1.$$</p>
Answer 4: <p>With series:</p> <p>$ \ln(x^2+1)=x^2-\frac{x^4}{2}+\frac{x^6}{3}-+...$ for $|x|&lt;1$,</p> <p>hence </p> <p>$\frac{\ln (x^2+1)} {x^2}=1-\frac{x^2}{2}+\frac{x^4}{3}-+... \to 1$ for $x \to 0$</p>
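A quick numerical confirmation that the ratio tends to $1$; a minimal Python sketch, not part of the original posts (math.log1p is used to avoid cancellation for small $x$; the sample points are arbitrary):
<pre><code># ln(1 + x^2) / x^2 -> 1 as x -> 0
import math

for x in (1e-1, 1e-3, 1e-5):
    print(x, math.log1p(x**2) / x**2)
</code></pre>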
Derivative of $\frac{1}{\sqrt{x-5}}$ via definition of derivative
2,115,482
5
https://math.stackexchange.com/questions/2115482/derivative-of-frac1-sqrtx-5-via-definition-of-derivative
<p>A high school student has asked me to help with a limit. The teacher wants them to calculate the derivative of $$\frac{1}{\sqrt{x-5}}$$ at the point $x=9$ using the definition of the derivative. AND! They don't know $(1+x)^\alpha \approx 1 + \alpha x$.</p> <p>I'm puzzled, since I don't know how to proceed without it.</p> <p>$$\left.\left(\dfrac{1}{\sqrt{x-5}}\right)'\,\right|_{x=9} = \lim_{h\to 0} \dfrac{1}{h}\left(\dfrac{1}{\sqrt{4+h}}-\dfrac{1}{2}\right)= \lim_{h\to 0} \dfrac{1}{2h}\left((1+h/4)^{-1/2}-1\right)\color{red}{{\bf=}}-1/16 $$</p> <p>Is there really a way to work around this?</p> <p>BTW, what is the easiest way to derive $(1+x)^\alpha \approx 1 + \alpha x$ for $\alpha \in \mathbb{R}$? I forgot how we did this in school.</p>
[ { "answer_id": 2115488, "score": 3, "text": "<p>Hint</p>\n\n<p>Multiply numerator and denominator of\n$$\n\\frac{2 - \\sqrt{4+h}}{2\\sqrt{4+h}}\n$$\nby the conjugate\n$$\n2 + \\sqrt{4+h}\\ .\n$$</p>\n\n<p>This trick will work for square roots but not for other exponents.</p>\n" }, { "answer_id": 2115493, "score": 3, "text": "<p>You have \n$$\n\\frac1h\\,\\left(\\frac1{\\sqrt{4+h}}-\\frac12\\right)\n=\\frac{2-\\sqrt{4+h}}{2h\\sqrt{4+h}}\n=-\\frac1{2\\sqrt{4+h}(2+\\sqrt{4+h})},\n$$\nafter multiplying and dividing by $2+\\sqrt{4+h}$. </p>\n" }, { "answer_id": 2115490, "score": 2, "text": "<p><strong>Hint</strong>. One may use, as $h \\to 0$,\n$$\n\\dfrac{1}{h}\\left(\\dfrac{1}{\\sqrt{4+h}}-\\dfrac{1}{2}\\right)=\\dfrac{1}{h}\\left(\\dfrac{2-\\sqrt{4+h}}{2\\sqrt{4+h}}\\right)=\\dfrac{1}{h}\\cdot\\dfrac{4-(4+h)}{2\\sqrt{4+h}\\cdot(2+\\sqrt{4+h})}.\n$$</p>\n" }, { "answer_id": 2115488, "score": 3, "text": "<p>Hint</p>\n\n<p>Multiply numerator and denominator of\n$$\n\\frac{2 - \\sqrt{4+h}}{2\\sqrt{4+h}}\n$$\nby the conjugate\n$$\n2 + \\sqrt{4+h}\\ .\n$$</p>\n\n<p>This trick will work for square roots but not for other exponents.</p>\n" }, { "answer_id": 2115493, "score": 3, "text": "<p>You have \n$$\n\\frac1h\\,\\left(\\frac1{\\sqrt{4+h}}-\\frac12\\right)\n=\\frac{2-\\sqrt{4+h}}{2h\\sqrt{4+h}}\n=-\\frac1{2\\sqrt{4+h}(2+\\sqrt{4+h})},\n$$\nafter multiplying and dividing by $2+\\sqrt{4+h}$. </p>\n" }, { "answer_id": 2115490, "score": 2, "text": "<p><strong>Hint</strong>. One may use, as $h \\to 0$,\n$$\n\\dfrac{1}{h}\\left(\\dfrac{1}{\\sqrt{4+h}}-\\dfrac{1}{2}\\right)=\\dfrac{1}{h}\\left(\\dfrac{2-\\sqrt{4+h}}{2\\sqrt{4+h}}\\right)=\\dfrac{1}{h}\\cdot\\dfrac{4-(4+h)}{2\\sqrt{4+h}\\cdot(2+\\sqrt{4+h})}.\n$$</p>\n" }, { "answer_id": 2115488, "score": 3, "text": "<p>Hint</p>\n\n<p>Multiply numerator and denominator of\n$$\n\\frac{2 - \\sqrt{4+h}}{2\\sqrt{4+h}}\n$$\nby the conjugate\n$$\n2 + \\sqrt{4+h}\\ .\n$$</p>\n\n<p>This trick will work for square roots but not for other exponents.</p>\n" }, { "answer_id": 2115493, "score": 3, "text": "<p>You have \n$$\n\\frac1h\\,\\left(\\frac1{\\sqrt{4+h}}-\\frac12\\right)\n=\\frac{2-\\sqrt{4+h}}{2h\\sqrt{4+h}}\n=-\\frac1{2\\sqrt{4+h}(2+\\sqrt{4+h})},\n$$\nafter multiplying and dividing by $2+\\sqrt{4+h}$. </p>\n" }, { "answer_id": 2115490, "score": 2, "text": "<p><strong>Hint</strong>. One may use, as $h \\to 0$,\n$$\n\\dfrac{1}{h}\\left(\\dfrac{1}{\\sqrt{4+h}}-\\dfrac{1}{2}\\right)=\\dfrac{1}{h}\\left(\\dfrac{2-\\sqrt{4+h}}{2\\sqrt{4+h}}\\right)=\\dfrac{1}{h}\\cdot\\dfrac{4-(4+h)}{2\\sqrt{4+h}\\cdot(2+\\sqrt{4+h})}.\n$$</p>\n" }, { "answer_id": 2115488, "score": 3, "text": "<p>Hint</p>\n\n<p>Multiply numerator and denominator of\n$$\n\\frac{2 - \\sqrt{4+h}}{2\\sqrt{4+h}}\n$$\nby the conjugate\n$$\n2 + \\sqrt{4+h}\\ .\n$$</p>\n\n<p>This trick will work for square roots but not for other exponents.</p>\n" }, { "answer_id": 2115493, "score": 3, "text": "<p>You have \n$$\n\\frac1h\\,\\left(\\frac1{\\sqrt{4+h}}-\\frac12\\right)\n=\\frac{2-\\sqrt{4+h}}{2h\\sqrt{4+h}}\n=-\\frac1{2\\sqrt{4+h}(2+\\sqrt{4+h})},\n$$\nafter multiplying and dividing by $2+\\sqrt{4+h}$. </p>\n" }, { "answer_id": 2115490, "score": 2, "text": "<p><strong>Hint</strong>. 
One may use, as $h \\to 0$,\n$$\n\\dfrac{1}{h}\\left(\\dfrac{1}{\\sqrt{4+h}}-\\dfrac{1}{2}\\right)=\\dfrac{1}{h}\\left(\\dfrac{2-\\sqrt{4+h}}{2\\sqrt{4+h}}\\right)=\\dfrac{1}{h}\\cdot\\dfrac{4-(4+h)}{2\\sqrt{4+h}\\cdot(2+\\sqrt{4+h})}.\n$$</p>\n" }, { "answer_id": 2115488, "score": 3, "text": "<p>Hint</p>\n\n<p>Multiply numerator and denominator of\n$$\n\\frac{2 - \\sqrt{4+h}}{2\\sqrt{4+h}}\n$$\nby the conjugate\n$$\n2 + \\sqrt{4+h}\\ .\n$$</p>\n\n<p>This trick will work for square roots but not for other exponents.</p>\n" }, { "answer_id": 2115493, "score": 3, "text": "<p>You have \n$$\n\\frac1h\\,\\left(\\frac1{\\sqrt{4+h}}-\\frac12\\right)\n=\\frac{2-\\sqrt{4+h}}{2h\\sqrt{4+h}}\n=-\\frac1{2\\sqrt{4+h}(2+\\sqrt{4+h})},\n$$\nafter multiplying and dividing by $2+\\sqrt{4+h}$. </p>\n" }, { "answer_id": 2115490, "score": 2, "text": "<p><strong>Hint</strong>. One may use, as $h \\to 0$,\n$$\n\\dfrac{1}{h}\\left(\\dfrac{1}{\\sqrt{4+h}}-\\dfrac{1}{2}\\right)=\\dfrac{1}{h}\\left(\\dfrac{2-\\sqrt{4+h}}{2\\sqrt{4+h}}\\right)=\\dfrac{1}{h}\\cdot\\dfrac{4-(4+h)}{2\\sqrt{4+h}\\cdot(2+\\sqrt{4+h})}.\n$$</p>\n" } ]
Question: Derivative of $\frac{1}{\sqrt{x-5}}$ via definition of derivative Answer 1: <p>Hint</p> <p>Multiply numerator and denominator of $$ \frac{2 - \sqrt{4+h}}{2\sqrt{4+h}} $$ by the conjugate $$ 2 + \sqrt{4+h}\ . $$</p> <p>This trick will work for square roots but not for other exponents.</p> Answer 2: <p>You have $$ \frac1h\,\left(\frac1{\sqrt{4+h}}-\frac12\right) =\frac{2-\sqrt{4+h}}{2h\sqrt{4+h}} =-\frac1{2\sqrt{4+h}(2+\sqrt{4+h})}, $$ after multiplying and dividing by $2+\sqrt{4+h}$. </p> Answer 3: <p><strong>Hint</strong>. One may use, as $h \to 0$, $$ \dfrac{1}{h}\left(\dfrac{1}{\sqrt{4+h}}-\dfrac{1}{2}\right)=\dfrac{1}{h}\left(\dfrac{2-\sqrt{4+h}}{2\sqrt{4+h}}\right)=\dfrac{1}{h}\cdot\dfrac{4-(4+h)}{2\sqrt{4+h}\cdot(2+\sqrt{4+h})}. $$</p>
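A short completion of the hints above, not part of the original answers, assuming (as the answers' use of $\sqrt{4+h}$ suggests) that the difference quotient is being evaluated at $x=9$, so that $x-5=4$: letting $h\to 0$ in the simplified form gives $$\lim_{h\to 0}\left(-\frac{1}{2\sqrt{4+h}\,(2+\sqrt{4+h})}\right)=-\frac{1}{2\cdot 2\cdot(2+2)}=-\frac{1}{16},$$ which agrees with $\frac{\mathrm d}{\mathrm dx}(x-5)^{-1/2}=-\frac{1}{2}(x-5)^{-3/2}$ evaluated at $x=9$.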
Checking my work for $21^{100}-12^{100}$ modulo $11$
2,115,489
5
https://math.stackexchange.com/questions/2115489/checking-my-work-for-21100-12100-modulo-11
<blockquote> <p><strong>Question 1:</strong> Is $21^{100}-12^{100}$ divisible by $11$?</p> </blockquote> <p><strong>My work:</strong> Note:$$\begin{align*} &amp; \color{red}{21^{100}}\equiv10^{100}\equiv 100^{50}\equiv1^{50}\equiv1 \mod 11\\ &amp; \color{blue}{12^{100}}\equiv 1^{100}\equiv 1\mod 11\end{align*}$$ Hence,$$\begin{align*}\color{red}{21^{100}}-\color{blue}{12^{100}}\equiv 1-1\equiv 0\mod 11\end{align*}$$ Hence, $21^{100}-12^{100}$ <em>is</em> divisible by $11$.</p> <hr> <blockquote> <p><strong>Actual Question:</strong></p> </blockquote> <ol> <li>Does my work hold? Is it logical and reasonable?</li> <li>Is there a quicker way of solving this problem?</li> </ol> <p>I am very new to modular arithmetic (i.e. I just learned it $10$ minutes ago), so tips on a simple book where I can learn a bit more in-depth material about modular arithmetic would be greatly appreciated!</p>
[ { "answer_id": 2115499, "score": 2, "text": "<p>An easier approach for such problems is making use of <a href=\"https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem\" rel=\"nofollow noreferrer\">Fermat's_little_theorem</a>\nwhich says, for any integer $a$, <br>\n$a^{p-1}\\equiv 1 \\mod{p}$<br>\nif $p$ is prime and $gcd(a,p)=1$</p>\n\n<p>Using this, we can solve the problem as follows.</p>\n\n<p>$21^{11-1}\\equiv 1 \\pmod {11}\\\\\n21^{10}\\equiv 1 \\pmod {11}\\\\\n21^{100}\\equiv 1 \\pmod {11} \\cdots(1)$</p>\n\n<p>$12^{11-1}\\equiv 1 \\pmod {11}\\\\\n12^{10}\\equiv 1 \\pmod {11}\\\\\n12^{100}\\equiv 1 \\pmod {11} \\cdots(2)$</p>\n\n<p>Therefore $21^{100}-12^{100} \\equiv 0 \\pmod {11}$</p>\n" }, { "answer_id": 2115500, "score": 2, "text": "<p>Your calculation looks correct. It would be more conventional (and slightly less work) to write it as:\n$$ 21\\equiv -1 \\pmod{11} \\qquad\\text{and}\\qquad 12\\equiv 1 \\pmod{11} $$\nso\n$$ 21^{100} - 12^{100} \\equiv (-1)^{100} - 1^{100} \\equiv 1 - 1 \\equiv 0 \\pmod{11} $$\nsince $-1$ to an even power is always $1$.</p>\n" }, { "answer_id": 2115502, "score": 0, "text": "<p>$$\\begin{align}\n21^{100} - 12^{100} &amp; \\equiv (11 \\times 2 - 1)^{100} - (11 + 1)^{100} \\\\\n&amp; \\equiv (0 \\times 2 - 1)^{100} - (0 + 1)^{100} \\\\\n&amp; \\equiv (-1)^{100} - 1^{100} \\\\\n&amp; \\equiv 1 - 1 \\\\\n&amp; \\equiv 0 \\pmod {11}\n\\end{align}$$</p>\n" }, { "answer_id": 2115523, "score": 0, "text": "<p>$21^{100}-12^{100}\\equiv21^0-12^0\\equiv0($mod $11)$</p>\n\n<p>since</p>\n\n<p>$21^{2k}\\equiv12^{k}\\equiv1($mod $11)$</p>\n" }, { "answer_id": 2115499, "score": 2, "text": "<p>An easier approach for such problems is making use of <a href=\"https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem\" rel=\"nofollow noreferrer\">Fermat's_little_theorem</a>\nwhich says, for any integer $a$, <br>\n$a^{p-1}\\equiv 1 \\mod{p}$<br>\nif $p$ is prime and $gcd(a,p)=1$</p>\n\n<p>Using this, we can solve the problem as follows.</p>\n\n<p>$21^{11-1}\\equiv 1 \\pmod {11}\\\\\n21^{10}\\equiv 1 \\pmod {11}\\\\\n21^{100}\\equiv 1 \\pmod {11} \\cdots(1)$</p>\n\n<p>$12^{11-1}\\equiv 1 \\pmod {11}\\\\\n12^{10}\\equiv 1 \\pmod {11}\\\\\n12^{100}\\equiv 1 \\pmod {11} \\cdots(2)$</p>\n\n<p>Therefore $21^{100}-12^{100} \\equiv 0 \\pmod {11}$</p>\n" }, { "answer_id": 2115500, "score": 2, "text": "<p>Your calculation looks correct. 
It would be more conventional (and slightly less work) to write it as:\n$$ 21\\equiv -1 \\pmod{11} \\qquad\\text{and}\\qquad 12\\equiv 1 \\pmod{11} $$\nso\n$$ 21^{100} - 12^{100} \\equiv (-1)^{100} - 1^{100} \\equiv 1 - 1 \\equiv 0 \\pmod{11} $$\nsince $-1$ to an even power is always $1$.</p>\n" }, { "answer_id": 2115502, "score": 0, "text": "<p>$$\\begin{align}\n21^{100} - 12^{100} &amp; \\equiv (11 \\times 2 - 1)^{100} - (11 + 1)^{100} \\\\\n&amp; \\equiv (0 \\times 2 - 1)^{100} - (0 + 1)^{100} \\\\\n&amp; \\equiv (-1)^{100} - 1^{100} \\\\\n&amp; \\equiv 1 - 1 \\\\\n&amp; \\equiv 0 \\pmod {11}\n\\end{align}$$</p>\n" }, { "answer_id": 2115523, "score": 0, "text": "<p>$21^{100}-12^{100}\\equiv21^0-12^0\\equiv0($mod $11)$</p>\n\n<p>since</p>\n\n<p>$21^{2k}\\equiv12^{k}\\equiv1($mod $11)$</p>\n" }, { "answer_id": 2115499, "score": 2, "text": "<p>An easier approach for such problems is making use of <a href=\"https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem\" rel=\"nofollow noreferrer\">Fermat's_little_theorem</a>\nwhich says, for any integer $a$, <br>\n$a^{p-1}\\equiv 1 \\mod{p}$<br>\nif $p$ is prime and $gcd(a,p)=1$</p>\n\n<p>Using this, we can solve the problem as follows.</p>\n\n<p>$21^{11-1}\\equiv 1 \\pmod {11}\\\\\n21^{10}\\equiv 1 \\pmod {11}\\\\\n21^{100}\\equiv 1 \\pmod {11} \\cdots(1)$</p>\n\n<p>$12^{11-1}\\equiv 1 \\pmod {11}\\\\\n12^{10}\\equiv 1 \\pmod {11}\\\\\n12^{100}\\equiv 1 \\pmod {11} \\cdots(2)$</p>\n\n<p>Therefore $21^{100}-12^{100} \\equiv 0 \\pmod {11}$</p>\n" }, { "answer_id": 2115500, "score": 2, "text": "<p>Your calculation looks correct. It would be more conventional (and slightly less work) to write it as:\n$$ 21\\equiv -1 \\pmod{11} \\qquad\\text{and}\\qquad 12\\equiv 1 \\pmod{11} $$\nso\n$$ 21^{100} - 12^{100} \\equiv (-1)^{100} - 1^{100} \\equiv 1 - 1 \\equiv 0 \\pmod{11} $$\nsince $-1$ to an even power is always $1$.</p>\n" }, { "answer_id": 2115502, "score": 0, "text": "<p>$$\\begin{align}\n21^{100} - 12^{100} &amp; \\equiv (11 \\times 2 - 1)^{100} - (11 + 1)^{100} \\\\\n&amp; \\equiv (0 \\times 2 - 1)^{100} - (0 + 1)^{100} \\\\\n&amp; \\equiv (-1)^{100} - 1^{100} \\\\\n&amp; \\equiv 1 - 1 \\\\\n&amp; \\equiv 0 \\pmod {11}\n\\end{align}$$</p>\n" }, { "answer_id": 2115523, "score": 0, "text": "<p>$21^{100}-12^{100}\\equiv21^0-12^0\\equiv0($mod $11)$</p>\n\n<p>since</p>\n\n<p>$21^{2k}\\equiv12^{k}\\equiv1($mod $11)$</p>\n" }, { "answer_id": 2115499, "score": 2, "text": "<p>An easier approach for such problems is making use of <a href=\"https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem\" rel=\"nofollow noreferrer\">Fermat's_little_theorem</a>\nwhich says, for any integer $a$, <br>\n$a^{p-1}\\equiv 1 \\mod{p}$<br>\nif $p$ is prime and $gcd(a,p)=1$</p>\n\n<p>Using this, we can solve the problem as follows.</p>\n\n<p>$21^{11-1}\\equiv 1 \\pmod {11}\\\\\n21^{10}\\equiv 1 \\pmod {11}\\\\\n21^{100}\\equiv 1 \\pmod {11} \\cdots(1)$</p>\n\n<p>$12^{11-1}\\equiv 1 \\pmod {11}\\\\\n12^{10}\\equiv 1 \\pmod {11}\\\\\n12^{100}\\equiv 1 \\pmod {11} \\cdots(2)$</p>\n\n<p>Therefore $21^{100}-12^{100} \\equiv 0 \\pmod {11}$</p>\n" }, { "answer_id": 2115500, "score": 2, "text": "<p>Your calculation looks correct. 
It would be more conventional (and slightly less work) to write it as:\n$$ 21\\equiv -1 \\pmod{11} \\qquad\\text{and}\\qquad 12\\equiv 1 \\pmod{11} $$\nso\n$$ 21^{100} - 12^{100} \\equiv (-1)^{100} - 1^{100} \\equiv 1 - 1 \\equiv 0 \\pmod{11} $$\nsince $-1$ to an even power is always $1$.</p>\n" }, { "answer_id": 2115502, "score": 0, "text": "<p>$$\\begin{align}\n21^{100} - 12^{100} &amp; \\equiv (11 \\times 2 - 1)^{100} - (11 + 1)^{100} \\\\\n&amp; \\equiv (0 \\times 2 - 1)^{100} - (0 + 1)^{100} \\\\\n&amp; \\equiv (-1)^{100} - 1^{100} \\\\\n&amp; \\equiv 1 - 1 \\\\\n&amp; \\equiv 0 \\pmod {11}\n\\end{align}$$</p>\n" }, { "answer_id": 2115523, "score": 0, "text": "<p>$21^{100}-12^{100}\\equiv21^0-12^0\\equiv0($mod $11)$</p>\n\n<p>since</p>\n\n<p>$21^{2k}\\equiv12^{k}\\equiv1($mod $11)$</p>\n" }, { "answer_id": 2115499, "score": 2, "text": "<p>An easier approach for such problems is making use of <a href=\"https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem\" rel=\"nofollow noreferrer\">Fermat's_little_theorem</a>\nwhich says, for any integer $a$, <br>\n$a^{p-1}\\equiv 1 \\mod{p}$<br>\nif $p$ is prime and $gcd(a,p)=1$</p>\n\n<p>Using this, we can solve the problem as follows.</p>\n\n<p>$21^{11-1}\\equiv 1 \\pmod {11}\\\\\n21^{10}\\equiv 1 \\pmod {11}\\\\\n21^{100}\\equiv 1 \\pmod {11} \\cdots(1)$</p>\n\n<p>$12^{11-1}\\equiv 1 \\pmod {11}\\\\\n12^{10}\\equiv 1 \\pmod {11}\\\\\n12^{100}\\equiv 1 \\pmod {11} \\cdots(2)$</p>\n\n<p>Therefore $21^{100}-12^{100} \\equiv 0 \\pmod {11}$</p>\n" }, { "answer_id": 2115500, "score": 2, "text": "<p>Your calculation looks correct. It would be more conventional (and slightly less work) to write it as:\n$$ 21\\equiv -1 \\pmod{11} \\qquad\\text{and}\\qquad 12\\equiv 1 \\pmod{11} $$\nso\n$$ 21^{100} - 12^{100} \\equiv (-1)^{100} - 1^{100} \\equiv 1 - 1 \\equiv 0 \\pmod{11} $$\nsince $-1$ to an even power is always $1$.</p>\n" }, { "answer_id": 2115502, "score": 0, "text": "<p>$$\\begin{align}\n21^{100} - 12^{100} &amp; \\equiv (11 \\times 2 - 1)^{100} - (11 + 1)^{100} \\\\\n&amp; \\equiv (0 \\times 2 - 1)^{100} - (0 + 1)^{100} \\\\\n&amp; \\equiv (-1)^{100} - 1^{100} \\\\\n&amp; \\equiv 1 - 1 \\\\\n&amp; \\equiv 0 \\pmod {11}\n\\end{align}$$</p>\n" }, { "answer_id": 2115523, "score": 0, "text": "<p>$21^{100}-12^{100}\\equiv21^0-12^0\\equiv0($mod $11)$</p>\n\n<p>since</p>\n\n<p>$21^{2k}\\equiv12^{k}\\equiv1($mod $11)$</p>\n" } ]
Question: Checking my work for $21^{100}-12^{100}$ modulo $11$ Answer 1: <p>An easier approach for such problems is making use of <a href="https://en.wikipedia.org/wiki/Fermat&#39;s_little_theorem" rel="nofollow noreferrer">Fermat's little theorem</a>, which says that, for any integer $a$,<br> $a^{p-1}\equiv 1 \mod{p}$<br> if $p$ is prime and $\gcd(a,p)=1$.</p> <p>Using this, we can solve the problem as follows.</p> <p>$21^{11-1}\equiv 1 \pmod {11}\\ 21^{10}\equiv 1 \pmod {11}\\ 21^{100}\equiv 1 \pmod {11} \cdots(1)$</p> <p>$12^{11-1}\equiv 1 \pmod {11}\\ 12^{10}\equiv 1 \pmod {11}\\ 12^{100}\equiv 1 \pmod {11} \cdots(2)$</p> <p>Therefore $21^{100}-12^{100} \equiv 0 \pmod {11}$.</p> Answer 2: <p>Your calculation looks correct. It would be more conventional (and slightly less work) to write it as: $$ 21\equiv -1 \pmod{11} \qquad\text{and}\qquad 12\equiv 1 \pmod{11} $$ so $$ 21^{100} - 12^{100} \equiv (-1)^{100} - 1^{100} \equiv 1 - 1 \equiv 0 \pmod{11} $$ since $-1$ to an even power is always $1$.</p> Answer 3: <p>$$\begin{align} 21^{100} - 12^{100} &amp; \equiv (11 \times 2 - 1)^{100} - (11 + 1)^{100} \\ &amp; \equiv (0 \times 2 - 1)^{100} - (0 + 1)^{100} \\ &amp; \equiv (-1)^{100} - 1^{100} \\ &amp; \equiv 1 - 1 \\ &amp; \equiv 0 \pmod {11} \end{align}$$</p> Answer 4: <p>$21^{100}-12^{100}\equiv21^0-12^0\equiv0\pmod{11}$</p> <p>since</p> <p>$21^{2k}\equiv12^{k}\equiv1\pmod{11}$</p>
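A quick numeric spot-check, not part of the original answers: the same reduction works for any even exponent, and for the exponent $2$ one can verify directly that $$21^{2}-12^{2}=441-144=297=27\cdot 11,$$ consistent with $21\equiv-1\pmod{11}$ and $12\equiv1\pmod{11}$.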
Limit $\lim_{x\to 0} \frac{\tan ^3 x - \sin ^3 x}{x^5}$ without l&#39;H&#244;pital&#39;s rule.
2,116,279
5
https://math.stackexchange.com/questions/2116279/limit-lim-x-to-0-frac-tan-3-x-sin-3-xx5-without-lh%c3%b4pitals-rule
<p>I need to solve $$\lim_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5}$$</p> <p>I did it like this:</p> <p>$\lim \limits_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5} = \lim \limits_{x\to 0} \dfrac{\tan ^3 x}{x^5} - \dfrac{\sin ^3 x}{x^5}$</p> <p>$=\dfrac 1{x^2} - \dfrac 1{x^2} =0$</p> <p>But it's wrong. Where have I gone wrong, and how should I do it?</p>
[ { "answer_id": 2116281, "score": 5, "text": "<p>HINT:</p>\n\n<p>$\\lim_{x\\to0}\\left(\\left(\\dfrac{\\tan x}x\\right)^3\\cdot\\dfrac1{x^2}-\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac1{x^2}\\right)$ is of the form $\\infty-\\infty$</p>\n\n<p>See <a href=\"https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms\">List of indeterminate forms</a></p>\n\n<p>Use \n$$\\dfrac1{\\cos^3x}\\cdot\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac{1-\\cos x}{x^2}\\cdot(1+\\cos x+\\cos^2x)$$</p>\n" }, { "answer_id": 2116294, "score": 4, "text": "<p>$$\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}$$\nwith $\\cos x\\approx 1-\\frac12x^2$ as $x\\to0$\n$$\\lim_{x\\to 0} \\dfrac{\\sin ^3 x}{x^3}\\times\\dfrac{1-(1-\\frac12x^2)^3}{x^2\\times1}=\\lim_{x\\to 0}\\frac32-\\frac34x^2+\\frac18x^4=\\frac32$$</p>\n" }, { "answer_id": 2116322, "score": 3, "text": "<p>$$\\begin{align}\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}&amp;=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\left(\\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\cdot\\frac{1+\\cos x}{1+\\cos x}\\right), \\quad\\text{note that }(1-\\cos x)(1+\\cos x)=\\sin^2x\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^5 x\\quad(1+\\cos x+\\cos^2x)}{x^5\\cos ^3 x\\quad(1+\\cos x)}\\\\\n&amp;=\\lim_{x\\to 0}\\left[\\left(\\frac{\\sin x}{x}\\right)^5\\cdot \\frac{1}{\\cos^3x}\\cdot\\frac{1+\\cos x+\\cos^2x}{1+\\cos x}\\right]\\\\\n&amp;=1^5\\cdot\\frac{1}{1^3}\\cdot\\frac{1+1+1^2}{1+1}=\\frac{3}{2}.\n\\end{align}$$</p>\n" }, { "answer_id": 2116328, "score": 2, "text": "<p>$$\\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\ldots=\\frac{\\sin^3x\\left(1-\\cos^3x\\right)}{x^5\\cos^3x}=\\tan^3x \\cdot\\frac{1-\\cos^3x}{x^5}$$ $$\\sim x^3 \\cdot\\frac{1-\\cos^3x}{x^5}=\\frac{1-\\cos^3x}{x^2}\\sim \\frac{1-\\left(1-x^2/2+o(x^2)\\right)^3}{x^2}$$ $$=\\frac{3x^2/2+o(x^2)}{x^2}=\\frac{3}{2}+\\frac{o(x^2)}{x^2}\\underbrace{\\to}_{\\text{ as }x\\to 0}\\frac{3}{2}+0=\\frac{3}{2}$$</p>\n" }, { "answer_id": 2116348, "score": 2, "text": "<p>$$\\frac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\tan^3x\\dfrac{1 - \\cos ^3 x}{x^5}=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{(1-\\cos x)(1+\\cos x+\\cos^2x)}{x^2}\\right)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{1-\\cos x}{x^2}\\right)(1+\\cos x+\\cos^2x)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{\\sin x}{x}\\right)^2\\left(\\dfrac{1}{1+\\cos x}\\right)(1+\\cos x+\\cos^2x)$$</p>\n\n<p>Now use the fundamental limits:</p>\n\n<p>$$\\lim_{x\\to 0}\\frac{\\sin x }{x}=1=\\lim_{x\\to 0}\\frac{\\tan x }{x}$$</p>\n" }, { "answer_id": 2116281, "score": 5, "text": "<p>HINT:</p>\n\n<p>$\\lim_{x\\to0}\\left(\\left(\\dfrac{\\tan x}x\\right)^3\\cdot\\dfrac1{x^2}-\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac1{x^2}\\right)$ is of the form $\\infty-\\infty$</p>\n\n<p>See <a href=\"https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms\">List of indeterminate forms</a></p>\n\n<p>Use \n$$\\dfrac1{\\cos^3x}\\cdot\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac{1-\\cos x}{x^2}\\cdot(1+\\cos x+\\cos^2x)$$</p>\n" }, { "answer_id": 2116294, "score": 4, "text": "<p>$$\\lim_{x\\to 0} \\dfrac{\\tan 
^3 x - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}$$\nwith $\\cos x\\approx 1-\\frac12x^2$ as $x\\to0$\n$$\\lim_{x\\to 0} \\dfrac{\\sin ^3 x}{x^3}\\times\\dfrac{1-(1-\\frac12x^2)^3}{x^2\\times1}=\\lim_{x\\to 0}\\frac32-\\frac34x^2+\\frac18x^4=\\frac32$$</p>\n" }, { "answer_id": 2116322, "score": 3, "text": "<p>$$\\begin{align}\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}&amp;=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\left(\\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\cdot\\frac{1+\\cos x}{1+\\cos x}\\right), \\quad\\text{note that }(1-\\cos x)(1+\\cos x)=\\sin^2x\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^5 x\\quad(1+\\cos x+\\cos^2x)}{x^5\\cos ^3 x\\quad(1+\\cos x)}\\\\\n&amp;=\\lim_{x\\to 0}\\left[\\left(\\frac{\\sin x}{x}\\right)^5\\cdot \\frac{1}{\\cos^3x}\\cdot\\frac{1+\\cos x+\\cos^2x}{1+\\cos x}\\right]\\\\\n&amp;=1^5\\cdot\\frac{1}{1^3}\\cdot\\frac{1+1+1^2}{1+1}=\\frac{3}{2}.\n\\end{align}$$</p>\n" }, { "answer_id": 2116328, "score": 2, "text": "<p>$$\\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\ldots=\\frac{\\sin^3x\\left(1-\\cos^3x\\right)}{x^5\\cos^3x}=\\tan^3x \\cdot\\frac{1-\\cos^3x}{x^5}$$ $$\\sim x^3 \\cdot\\frac{1-\\cos^3x}{x^5}=\\frac{1-\\cos^3x}{x^2}\\sim \\frac{1-\\left(1-x^2/2+o(x^2)\\right)^3}{x^2}$$ $$=\\frac{3x^2/2+o(x^2)}{x^2}=\\frac{3}{2}+\\frac{o(x^2)}{x^2}\\underbrace{\\to}_{\\text{ as }x\\to 0}\\frac{3}{2}+0=\\frac{3}{2}$$</p>\n" }, { "answer_id": 2116348, "score": 2, "text": "<p>$$\\frac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\tan^3x\\dfrac{1 - \\cos ^3 x}{x^5}=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{(1-\\cos x)(1+\\cos x+\\cos^2x)}{x^2}\\right)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{1-\\cos x}{x^2}\\right)(1+\\cos x+\\cos^2x)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{\\sin x}{x}\\right)^2\\left(\\dfrac{1}{1+\\cos x}\\right)(1+\\cos x+\\cos^2x)$$</p>\n\n<p>Now use the fundamental limits:</p>\n\n<p>$$\\lim_{x\\to 0}\\frac{\\sin x }{x}=1=\\lim_{x\\to 0}\\frac{\\tan x }{x}$$</p>\n" }, { "answer_id": 2116281, "score": 5, "text": "<p>HINT:</p>\n\n<p>$\\lim_{x\\to0}\\left(\\left(\\dfrac{\\tan x}x\\right)^3\\cdot\\dfrac1{x^2}-\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac1{x^2}\\right)$ is of the form $\\infty-\\infty$</p>\n\n<p>See <a href=\"https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms\">List of indeterminate forms</a></p>\n\n<p>Use \n$$\\dfrac1{\\cos^3x}\\cdot\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac{1-\\cos x}{x^2}\\cdot(1+\\cos x+\\cos^2x)$$</p>\n" }, { "answer_id": 2116294, "score": 4, "text": "<p>$$\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}$$\nwith $\\cos x\\approx 1-\\frac12x^2$ as $x\\to0$\n$$\\lim_{x\\to 0} \\dfrac{\\sin ^3 x}{x^3}\\times\\dfrac{1-(1-\\frac12x^2)^3}{x^2\\times1}=\\lim_{x\\to 0}\\frac32-\\frac34x^2+\\frac18x^4=\\frac32$$</p>\n" }, { "answer_id": 2116322, "score": 3, "text": "<p>$$\\begin{align}\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}&amp;=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}\\\\\n&amp;=\\lim_{x\\to 0} 
\\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\left(\\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\cdot\\frac{1+\\cos x}{1+\\cos x}\\right), \\quad\\text{note that }(1-\\cos x)(1+\\cos x)=\\sin^2x\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^5 x\\quad(1+\\cos x+\\cos^2x)}{x^5\\cos ^3 x\\quad(1+\\cos x)}\\\\\n&amp;=\\lim_{x\\to 0}\\left[\\left(\\frac{\\sin x}{x}\\right)^5\\cdot \\frac{1}{\\cos^3x}\\cdot\\frac{1+\\cos x+\\cos^2x}{1+\\cos x}\\right]\\\\\n&amp;=1^5\\cdot\\frac{1}{1^3}\\cdot\\frac{1+1+1^2}{1+1}=\\frac{3}{2}.\n\\end{align}$$</p>\n" }, { "answer_id": 2116328, "score": 2, "text": "<p>$$\\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\ldots=\\frac{\\sin^3x\\left(1-\\cos^3x\\right)}{x^5\\cos^3x}=\\tan^3x \\cdot\\frac{1-\\cos^3x}{x^5}$$ $$\\sim x^3 \\cdot\\frac{1-\\cos^3x}{x^5}=\\frac{1-\\cos^3x}{x^2}\\sim \\frac{1-\\left(1-x^2/2+o(x^2)\\right)^3}{x^2}$$ $$=\\frac{3x^2/2+o(x^2)}{x^2}=\\frac{3}{2}+\\frac{o(x^2)}{x^2}\\underbrace{\\to}_{\\text{ as }x\\to 0}\\frac{3}{2}+0=\\frac{3}{2}$$</p>\n" }, { "answer_id": 2116348, "score": 2, "text": "<p>$$\\frac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\tan^3x\\dfrac{1 - \\cos ^3 x}{x^5}=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{(1-\\cos x)(1+\\cos x+\\cos^2x)}{x^2}\\right)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{1-\\cos x}{x^2}\\right)(1+\\cos x+\\cos^2x)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{\\sin x}{x}\\right)^2\\left(\\dfrac{1}{1+\\cos x}\\right)(1+\\cos x+\\cos^2x)$$</p>\n\n<p>Now use the fundamental limits:</p>\n\n<p>$$\\lim_{x\\to 0}\\frac{\\sin x }{x}=1=\\lim_{x\\to 0}\\frac{\\tan x }{x}$$</p>\n" }, { "answer_id": 2116281, "score": 5, "text": "<p>HINT:</p>\n\n<p>$\\lim_{x\\to0}\\left(\\left(\\dfrac{\\tan x}x\\right)^3\\cdot\\dfrac1{x^2}-\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac1{x^2}\\right)$ is of the form $\\infty-\\infty$</p>\n\n<p>See <a href=\"https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms\">List of indeterminate forms</a></p>\n\n<p>Use \n$$\\dfrac1{\\cos^3x}\\cdot\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac{1-\\cos x}{x^2}\\cdot(1+\\cos x+\\cos^2x)$$</p>\n" }, { "answer_id": 2116294, "score": 4, "text": "<p>$$\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}$$\nwith $\\cos x\\approx 1-\\frac12x^2$ as $x\\to0$\n$$\\lim_{x\\to 0} \\dfrac{\\sin ^3 x}{x^3}\\times\\dfrac{1-(1-\\frac12x^2)^3}{x^2\\times1}=\\lim_{x\\to 0}\\frac32-\\frac34x^2+\\frac18x^4=\\frac32$$</p>\n" }, { "answer_id": 2116322, "score": 3, "text": "<p>$$\\begin{align}\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}&amp;=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\left(\\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\cdot\\frac{1+\\cos x}{1+\\cos x}\\right), \\quad\\text{note that }(1-\\cos x)(1+\\cos x)=\\sin^2x\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^5 x\\quad(1+\\cos x+\\cos^2x)}{x^5\\cos ^3 x\\quad(1+\\cos x)}\\\\\n&amp;=\\lim_{x\\to 0}\\left[\\left(\\frac{\\sin x}{x}\\right)^5\\cdot \\frac{1}{\\cos^3x}\\cdot\\frac{1+\\cos 
x+\\cos^2x}{1+\\cos x}\\right]\\\\\n&amp;=1^5\\cdot\\frac{1}{1^3}\\cdot\\frac{1+1+1^2}{1+1}=\\frac{3}{2}.\n\\end{align}$$</p>\n" }, { "answer_id": 2116328, "score": 2, "text": "<p>$$\\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\ldots=\\frac{\\sin^3x\\left(1-\\cos^3x\\right)}{x^5\\cos^3x}=\\tan^3x \\cdot\\frac{1-\\cos^3x}{x^5}$$ $$\\sim x^3 \\cdot\\frac{1-\\cos^3x}{x^5}=\\frac{1-\\cos^3x}{x^2}\\sim \\frac{1-\\left(1-x^2/2+o(x^2)\\right)^3}{x^2}$$ $$=\\frac{3x^2/2+o(x^2)}{x^2}=\\frac{3}{2}+\\frac{o(x^2)}{x^2}\\underbrace{\\to}_{\\text{ as }x\\to 0}\\frac{3}{2}+0=\\frac{3}{2}$$</p>\n" }, { "answer_id": 2116348, "score": 2, "text": "<p>$$\\frac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\tan^3x\\dfrac{1 - \\cos ^3 x}{x^5}=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{(1-\\cos x)(1+\\cos x+\\cos^2x)}{x^2}\\right)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{1-\\cos x}{x^2}\\right)(1+\\cos x+\\cos^2x)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{\\sin x}{x}\\right)^2\\left(\\dfrac{1}{1+\\cos x}\\right)(1+\\cos x+\\cos^2x)$$</p>\n\n<p>Now use the fundamental limits:</p>\n\n<p>$$\\lim_{x\\to 0}\\frac{\\sin x }{x}=1=\\lim_{x\\to 0}\\frac{\\tan x }{x}$$</p>\n" }, { "answer_id": 2116281, "score": 5, "text": "<p>HINT:</p>\n\n<p>$\\lim_{x\\to0}\\left(\\left(\\dfrac{\\tan x}x\\right)^3\\cdot\\dfrac1{x^2}-\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac1{x^2}\\right)$ is of the form $\\infty-\\infty$</p>\n\n<p>See <a href=\"https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms\">List of indeterminate forms</a></p>\n\n<p>Use \n$$\\dfrac1{\\cos^3x}\\cdot\\left(\\dfrac{\\sin x}x\\right)^3\\cdot\\dfrac{1-\\cos x}{x^2}\\cdot(1+\\cos x+\\cos^2x)$$</p>\n" }, { "answer_id": 2116294, "score": 4, "text": "<p>$$\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}$$\nwith $\\cos x\\approx 1-\\frac12x^2$ as $x\\to0$\n$$\\lim_{x\\to 0} \\dfrac{\\sin ^3 x}{x^3}\\times\\dfrac{1-(1-\\frac12x^2)^3}{x^2\\times1}=\\lim_{x\\to 0}\\frac32-\\frac34x^2+\\frac18x^4=\\frac32$$</p>\n" }, { "answer_id": 2116322, "score": 3, "text": "<p>$$\\begin{align}\\lim_{x\\to 0} \\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}&amp;=\\lim_{x\\to 0} \\dfrac{\\frac{\\sin ^3 x}{\\cos^3x} - \\sin ^3 x}{x^5}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x(1-\\cos^3x)}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\\\\n&amp;=\\lim_{x\\to 0} \\left(\\dfrac{\\sin ^3 x\\Big[(1-\\cos x)(1+\\cos x+\\cos^2x)\\Big]}{x^5\\cos ^3 x}\\cdot\\frac{1+\\cos x}{1+\\cos x}\\right), \\quad\\text{note that }(1-\\cos x)(1+\\cos x)=\\sin^2x\\\\\n&amp;=\\lim_{x\\to 0} \\dfrac{\\sin ^5 x\\quad(1+\\cos x+\\cos^2x)}{x^5\\cos ^3 x\\quad(1+\\cos x)}\\\\\n&amp;=\\lim_{x\\to 0}\\left[\\left(\\frac{\\sin x}{x}\\right)^5\\cdot \\frac{1}{\\cos^3x}\\cdot\\frac{1+\\cos x+\\cos^2x}{1+\\cos x}\\right]\\\\\n&amp;=1^5\\cdot\\frac{1}{1^3}\\cdot\\frac{1+1+1^2}{1+1}=\\frac{3}{2}.\n\\end{align}$$</p>\n" }, { "answer_id": 2116328, "score": 2, "text": "<p>$$\\dfrac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\ldots=\\frac{\\sin^3x\\left(1-\\cos^3x\\right)}{x^5\\cos^3x}=\\tan^3x \\cdot\\frac{1-\\cos^3x}{x^5}$$ $$\\sim x^3 \\cdot\\frac{1-\\cos^3x}{x^5}=\\frac{1-\\cos^3x}{x^2}\\sim \\frac{1-\\left(1-x^2/2+o(x^2)\\right)^3}{x^2}$$ $$=\\frac{3x^2/2+o(x^2)}{x^2}=\\frac{3}{2}+\\frac{o(x^2)}{x^2}\\underbrace{\\to}_{\\text{ as }x\\to 0}\\frac{3}{2}+0=\\frac{3}{2}$$</p>\n" }, { 
"answer_id": 2116348, "score": 2, "text": "<p>$$\\frac{\\tan ^3 x - \\sin ^3 x}{x^5}=\\tan^3x\\dfrac{1 - \\cos ^3 x}{x^5}=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{(1-\\cos x)(1+\\cos x+\\cos^2x)}{x^2}\\right)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{1-\\cos x}{x^2}\\right)(1+\\cos x+\\cos^2x)=\\\\\n=\\left(\\dfrac{\\tan x}{x}\\right)^3\\left(\\dfrac{\\sin x}{x}\\right)^2\\left(\\dfrac{1}{1+\\cos x}\\right)(1+\\cos x+\\cos^2x)$$</p>\n\n<p>Now use the fundamental limits:</p>\n\n<p>$$\\lim_{x\\to 0}\\frac{\\sin x }{x}=1=\\lim_{x\\to 0}\\frac{\\tan x }{x}$$</p>\n" } ]
Question: Limit $\lim_{x\to 0} \frac{\tan ^3 x - \sin ^3 x}{x^5}$ without l&#39;H&#244;pital&#39;s rule. Answer 1: <p>HINT:</p> <p>$\lim_{x\to0}\left(\left(\dfrac{\tan x}x\right)^3\cdot\dfrac1{x^2}-\left(\dfrac{\sin x}x\right)^3\cdot\dfrac1{x^2}\right)$ is of the form $\infty-\infty$</p> <p>See <a href="https://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms">List of indeterminate forms</a></p> <p>Use $$\dfrac1{\cos^3x}\cdot\left(\dfrac{\sin x}x\right)^3\cdot\dfrac{1-\cos x}{x^2}\cdot(1+\cos x+\cos^2x)$$</p> Answer 2: <p>$$\lim_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5}=\lim_{x\to 0} \dfrac{\frac{\sin ^3 x}{\cos^3x} - \sin ^3 x}{x^5}=\lim_{x\to 0} \dfrac{\sin ^3 x(1-\cos^3x)}{x^5\cos ^3 x}$$ with $\cos x\approx 1-\frac12x^2$ as $x\to0$ $$\lim_{x\to 0} \dfrac{\sin ^3 x}{x^3}\times\dfrac{1-(1-\frac12x^2)^3}{x^2\times1}=\lim_{x\to 0}\frac32-\frac34x^2+\frac18x^4=\frac32$$</p> Answer 3: <p>$$\begin{align}\lim_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5}&amp;=\lim_{x\to 0} \dfrac{\frac{\sin ^3 x}{\cos^3x} - \sin ^3 x}{x^5}\\ &amp;=\lim_{x\to 0} \dfrac{\sin ^3 x(1-\cos^3x)}{x^5\cos ^3 x}\\ &amp;=\lim_{x\to 0} \dfrac{\sin ^3 x\Big[(1-\cos x)(1+\cos x+\cos^2x)\Big]}{x^5\cos ^3 x}\\ &amp;=\lim_{x\to 0} \left(\dfrac{\sin ^3 x\Big[(1-\cos x)(1+\cos x+\cos^2x)\Big]}{x^5\cos ^3 x}\cdot\frac{1+\cos x}{1+\cos x}\right), \quad\text{note that }(1-\cos x)(1+\cos x)=\sin^2x\\ &amp;=\lim_{x\to 0} \dfrac{\sin ^5 x\quad(1+\cos x+\cos^2x)}{x^5\cos ^3 x\quad(1+\cos x)}\\ &amp;=\lim_{x\to 0}\left[\left(\frac{\sin x}{x}\right)^5\cdot \frac{1}{\cos^3x}\cdot\frac{1+\cos x+\cos^2x}{1+\cos x}\right]\\ &amp;=1^5\cdot\frac{1}{1^3}\cdot\frac{1+1+1^2}{1+1}=\frac{3}{2}. \end{align}$$</p> Answer 4: <p>$$\dfrac{\tan ^3 x - \sin ^3 x}{x^5}=\ldots=\frac{\sin^3x\left(1-\cos^3x\right)}{x^5\cos^3x}=\tan^3x \cdot\frac{1-\cos^3x}{x^5}$$ $$\sim x^3 \cdot\frac{1-\cos^3x}{x^5}=\frac{1-\cos^3x}{x^2}\sim \frac{1-\left(1-x^2/2+o(x^2)\right)^3}{x^2}$$ $$=\frac{3x^2/2+o(x^2)}{x^2}=\frac{3}{2}+\frac{o(x^2)}{x^2}\underbrace{\to}_{\text{ as }x\to 0}\frac{3}{2}+0=\frac{3}{2}$$</p> Answer 5: <p>$$\frac{\tan ^3 x - \sin ^3 x}{x^5}=\tan^3x\dfrac{1 - \cos ^3 x}{x^5}=\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{(1-\cos x)(1+\cos x+\cos^2x)}{x^2}\right)=\\ =\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{1-\cos x}{x^2}\right)(1+\cos x+\cos^2x)=\\ =\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{\sin x}{x}\right)^2\left(\dfrac{1}{1+\cos x}\right)(1+\cos x+\cos^2x)$$</p> <p>Now use the fundamental limits:</p> <p>$$\lim_{x\to 0}\frac{\sin x }{x}=1=\lim_{x\to 0}\frac{\tan x }{x}$$</p>
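As an independent cross-check, not part of the original answers, the same value follows from the Maclaurin expansions $\tan x=x+\frac{x^3}{3}+O(x^5)$ and $\sin x=x-\frac{x^3}{6}+O(x^5)$: cubing gives $$\tan^3 x=x^3+x^5+O(x^7),\qquad \sin^3 x=x^3-\frac{x^5}{2}+O(x^7),$$ so $\tan^3x-\sin^3x=\frac{3}{2}x^5+O(x^7)$ and the limit is again $\frac{3}{2}$.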
Given $ x_1 + x_2 + .... + x_{1994} = 1994$ find all $x_i$
2,116,363
5
https://math.stackexchange.com/questions/2116363/given-x-1-x-2-x-1994-1994-find-all-x-i
<p>$ x_1 + x_2 + \dots + x_{1994} = 1994$</p> <p>$ x_1^3 + x_2^3 + \dots + x_{1994}^3 = x_1^4 + x_2^4 + \dots + x_{1994}^4$</p> <p>Find all $x_i$, where all the $x_i$ are real numbers.</p> <p>I tried to prove that all of them are equal to $1$ using inequalities, but couldn't do anything useful. I also tried to prove the base case where there are only $x_1$ and $x_2$, but had no luck that way. Any help would be appreciated.</p>
[ { "answer_id": 2116377, "score": 5, "text": "<p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then\n$$\n\\sum_{i=1}^n a_ib_i≥\\frac{1}{n}\\left(\\sum_{i=1}^n a_i\\right)\\left(\\sum_{i=1}^n b_i\\right).\n$$\nwith equality if and only if one of the sequences is constant.</p>\n\n<p>We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus\n$$\n0=\\sum_{i=1}^{1994} x_i^4-\\sum_{i=1}^{1994} x_i^3=\\sum_{i=1}^{1994} x_i^3(x_i-1)≥\\frac{1}{1994}\\left(\\sum_{i=1}^n x_i^3\\right)\\left(\\sum_{i=1}^n (x_i-1)\\right)=0.\n$$\nHence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p>\n" }, { "answer_id": 2116379, "score": 2, "text": "<p>$$1994\\sum_{i=1}^{1994}x^4\\geq\\sum_{i=1}^{1994}x_i\\sum_{i=1}^{1994}x_i^3$$\nThe equality occurs for $x_i=1$.</p>\n\n<p>The above inequality it's enough to prove for all $x_i\\geq0$, which is true by Muirhead.</p>\n" }, { "answer_id": 2116377, "score": 5, "text": "<p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then\n$$\n\\sum_{i=1}^n a_ib_i≥\\frac{1}{n}\\left(\\sum_{i=1}^n a_i\\right)\\left(\\sum_{i=1}^n b_i\\right).\n$$\nwith equality if and only if one of the sequences is constant.</p>\n\n<p>We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus\n$$\n0=\\sum_{i=1}^{1994} x_i^4-\\sum_{i=1}^{1994} x_i^3=\\sum_{i=1}^{1994} x_i^3(x_i-1)≥\\frac{1}{1994}\\left(\\sum_{i=1}^n x_i^3\\right)\\left(\\sum_{i=1}^n (x_i-1)\\right)=0.\n$$\nHence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p>\n" }, { "answer_id": 2116379, "score": 2, "text": "<p>$$1994\\sum_{i=1}^{1994}x^4\\geq\\sum_{i=1}^{1994}x_i\\sum_{i=1}^{1994}x_i^3$$\nThe equality occurs for $x_i=1$.</p>\n\n<p>The above inequality it's enough to prove for all $x_i\\geq0$, which is true by Muirhead.</p>\n" }, { "answer_id": 2116377, "score": 5, "text": "<p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then\n$$\n\\sum_{i=1}^n a_ib_i≥\\frac{1}{n}\\left(\\sum_{i=1}^n a_i\\right)\\left(\\sum_{i=1}^n b_i\\right).\n$$\nwith equality if and only if one of the sequences is constant.</p>\n\n<p>We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus\n$$\n0=\\sum_{i=1}^{1994} x_i^4-\\sum_{i=1}^{1994} x_i^3=\\sum_{i=1}^{1994} x_i^3(x_i-1)≥\\frac{1}{1994}\\left(\\sum_{i=1}^n x_i^3\\right)\\left(\\sum_{i=1}^n (x_i-1)\\right)=0.\n$$\nHence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p>\n" }, { "answer_id": 2116379, "score": 2, "text": "<p>$$1994\\sum_{i=1}^{1994}x^4\\geq\\sum_{i=1}^{1994}x_i\\sum_{i=1}^{1994}x_i^3$$\nThe equality occurs for $x_i=1$.</p>\n\n<p>The above inequality it's enough to prove for all $x_i\\geq0$, which is true by Muirhead.</p>\n" }, { "answer_id": 2116377, "score": 5, "text": "<p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then\n$$\n\\sum_{i=1}^n a_ib_i≥\\frac{1}{n}\\left(\\sum_{i=1}^n a_i\\right)\\left(\\sum_{i=1}^n b_i\\right).\n$$\nwith equality if and only if one of the sequences is constant.</p>\n\n<p>We 
then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus\n$$\n0=\\sum_{i=1}^{1994} x_i^4-\\sum_{i=1}^{1994} x_i^3=\\sum_{i=1}^{1994} x_i^3(x_i-1)≥\\frac{1}{1994}\\left(\\sum_{i=1}^n x_i^3\\right)\\left(\\sum_{i=1}^n (x_i-1)\\right)=0.\n$$\nHence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p>\n" }, { "answer_id": 2116379, "score": 2, "text": "<p>$$1994\\sum_{i=1}^{1994}x^4\\geq\\sum_{i=1}^{1994}x_i\\sum_{i=1}^{1994}x_i^3$$\nThe equality occurs for $x_i=1$.</p>\n\n<p>The above inequality it's enough to prove for all $x_i\\geq0$, which is true by Muirhead.</p>\n" }, { "answer_id": 2116377, "score": 5, "text": "<p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then\n$$\n\\sum_{i=1}^n a_ib_i≥\\frac{1}{n}\\left(\\sum_{i=1}^n a_i\\right)\\left(\\sum_{i=1}^n b_i\\right).\n$$\nwith equality if and only if one of the sequences is constant.</p>\n\n<p>We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus\n$$\n0=\\sum_{i=1}^{1994} x_i^4-\\sum_{i=1}^{1994} x_i^3=\\sum_{i=1}^{1994} x_i^3(x_i-1)≥\\frac{1}{1994}\\left(\\sum_{i=1}^n x_i^3\\right)\\left(\\sum_{i=1}^n (x_i-1)\\right)=0.\n$$\nHence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p>\n" }, { "answer_id": 2116379, "score": 2, "text": "<p>$$1994\\sum_{i=1}^{1994}x^4\\geq\\sum_{i=1}^{1994}x_i\\sum_{i=1}^{1994}x_i^3$$\nThe equality occurs for $x_i=1$.</p>\n\n<p>The above inequality it's enough to prove for all $x_i\\geq0$, which is true by Muirhead.</p>\n" } ]
Question: Given $ x_1 + x_2 + .... + x_{1994} = 1994$ find all $x_i$ Answer 1: <p>We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then $$ \sum_{i=1}^n a_ib_i≥\frac{1}{n}\left(\sum_{i=1}^n a_i\right)\left(\sum_{i=1}^n b_i\right), $$ with equality if and only if one of the sequences is constant.</p> <p>We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus $$ 0=\sum_{i=1}^{1994} x_i^4-\sum_{i=1}^{1994} x_i^3=\sum_{i=1}^{1994} x_i^3(x_i-1)≥\frac{1}{1994}\left(\sum_{i=1}^{1994} x_i^3\right)\left(\sum_{i=1}^{1994} (x_i-1)\right)=0. $$ Hence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$.</p> Answer 2: <p>$$1994\sum_{i=1}^{1994}x_i^4\geq\sum_{i=1}^{1994}x_i\sum_{i=1}^{1994}x_i^3$$ The equality occurs for $x_i=1$.</p> <p>It's enough to prove the above inequality for all $x_i\geq0$, where it holds by Muirhead.</p>
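A minimal illustration of the Tchebychev equality step, not part of the original answers, in the case $n=2$: for $a_1\le a_2$ and $b_1\le b_2$, $$2(a_1b_1+a_2b_2)-(a_1+a_2)(b_1+b_2)=(a_2-a_1)(b_2-b_1)\ge 0,$$ with equality exactly when $a_1=a_2$ or $b_1=b_2$; it is this equality case that forces one of the two sequences above to be constant and hence $x_1=\dots=x_{1994}=1$.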
Find joint density function of X and X+Y (exponential distribution)
2,116,572
5
https://math.stackexchange.com/questions/2116572/find-joint-density-function-of-x-and-xy-exponential-distribution
<p>I tried finding the following density function:</p> <blockquote> <p>Let $X$ and $Y$ be independent random variables, each having the exponential distribution with parameter $\lambda$. Find the joint density function of $X$ and $Z=X+Y$.</p> </blockquote> <p>$\begin{aligned}f_{X,X+Y}=&amp;\frac{\partial^2}{\partial x\,\partial z}\mathbb P (X\leq x,Y\leq z-x)\\=&amp;\frac{\partial^2}{\partial x\,\partial z}\int_{-\infty}^x\int_{-\infty}^{z-x}f(u,v)\,\mathrm dv\,\mathrm du=\frac{\partial}{\partial z}\left[\frac{\partial}{\partial x}\int_0^x\int_0^{z-x}\lambda^2e^{-\lambda u-\lambda v}\,\mathrm dv\,\mathrm d u\right]\\ =&amp;\frac{\partial}{\partial z}\int_0^{z-x}\lambda^2e^{-\lambda x-\lambda v}\,\mathrm dv=\frac{\partial}{\partial z}\int_x^{z}\lambda^2e^{-\lambda x-\lambda(v-x)}\,\mathrm dv\\ =&amp;\lambda^2e^{-\lambda x-\lambda(z-x)}=\lambda^2e^{-\lambda z} \end{aligned}$</p> <p>But I don't think this is correct, because $\mathbb P(X\leq x,X+Y\leq z)$ isn't equal to $\mathbb P(X\leq x,Y\leq z-x)$. But I can't think of anything else.</p> <p>Could someone help me a little bit with this exercise?</p> <p>I'm going to try the following:</p> <p>$\mathbb P(X\leq x,Z\leq z)=\int_{-\infty}^x\int_{-\infty}^zf_X(u)f_Z(v)\,\mathrm du\,\mathrm dv$,</p> <p>where I use the convolution formula. If I made no mistake, this gives</p> <p>$f_{X+Y}(z)=z\lambda^2e^{-\lambda z}$, if $z&gt;0$.</p> <p>Then calculating the joint density function:</p> <p>$\begin{aligned}f_{X,X+Y}(x,z)=\frac{\partial}{\partial z}\left[\int_{-\infty}^z\lambda^3ve^{-\lambda x-\lambda v}\,\mathrm dv\right]. \end{aligned}$</p> <p>Oh wait, this is also incorrect, because $X$ and $X+Y$ are dependent.</p> <p>I really don't know how to proceed; could someone help?</p> <p>I've also looked at this <a href="https://math.stackexchange.com/questions/595155/joint-density-function-of-x-and-xy-standard-normal-random-variables">post</a>, but they mention the Jacobian, and my teacher has chosen to skip the paragraph about the Jacobian, so it wouldn't make sense if I had to use that.</p>
[ { "answer_id": 2116764, "score": 5, "text": "<p>Directly, by the Jacobian change of variables, we have:</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\\lVert\\tfrac{\\partial(x,z-x)}{\\partial (x,z)}\\rVert\\\\[1ex]&amp; =~ \\lambda^2\\mathsf e^{-\\lambda(x+z-x)}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge0\\leqslant x\\leqslant z}\\\\[1ex] &amp;=~ \\lambda^2\\mathsf e^{-\\lambda z}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge 0\\leqslant x\\leqslant z}\\end{align}$$</p>\n\n<p>(The Jacobian determinant is, conveniently, $1$ in this case).</p>\n\n<hr>\n\n<p>By your method. (Using the Fundamental Theorem of Calculus:)</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z}\\mathsf P(X\\leq x,X+Y\\leq z) \\\\[1ex] &amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z} \\int_0^x f_X(s)\\mathsf P(Y\\leq z-X\\mid X=s)\\operatorname d s \\\\[1ex] &amp;=~ f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\mathsf P(Y\\leq z-x) \\\\[1ex] &amp;= f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\int_0^{z-x}f_Y(t)\\operatorname d t \\\\[1ex] &amp;=~ f_X(x)\\,f_Y(z-x) \\\\[1ex] &amp;=~ \\text{(see above)}\\end{align} $$</p>\n" }, { "answer_id": 2116729, "score": 1, "text": "<p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p>\n\n<p>Let's concentrate on the common distribution function:</p>\n\n<p>$$F_{X,X+Y}(x,y)=P(X&lt;x\\cap X+Y&lt;y)=$$\n$$=\\int_0^{\\infty}P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)\\ \\lambda e^{-\\lambda z} \\ dz.\\tag1$$</p>\n\n<p>After substituting $z$ for $X$ we get\n$$P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)=P(z&lt;x\\cap Y&lt;y-z)=\\mathbb I_{z&lt;x}P(Y&lt;y-z)$$\nbecause $X$ and $Y$ are independent.</p>\n\n<p>Since $\\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\\min(x,y)\\leq z$, $(1)$ will be</p>\n\n<p>$$\\lambda\\int_0^{\\min(x,y)}P(Y&lt;y-z) e^{-\\lambda z}\\ dz=$$</p>\n\n<p>$$=\\lambda\\int_0^{\\min(x,y)}\\left(1-e^{-\\lambda(y-z)}\\right)e^{-\\lambda z}\\ dz=$$\n$$=\\lambda\\int_0^{\\min(x,y)}e^{-\\lambda z}\\ dz-\\lambda e^{-\\lambda y}\\int_0^{\\min(x,y)}\\ dz=$$\n$$=1-e^{-\\lambda \\min(x,y)}-\\lambda e^{-\\lambda y}\\min(x,y).$$</p>\n\n<p>Now, let's compute the common density. For $x&lt;y$ we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\frac{\\partial^2 F_{X,X+Y}}{\\partial y \\partial x}=\\frac{\\partial}{\\partial y}\\frac{\\partial}{\\partial x}\\left[1-e^{-\\lambda x}-\\lambda e^{-\\lambda y}x\\right]=$$\n$$=\\lambda\\frac{\\partial}{\\partial y}\\left(e^{-\\lambda x}-e^{-\\lambda y}\\right)=\\lambda^2e^{-\\lambda y}.$$</p>\n\n<p>But if $y\\leq x$ then the operation above results in $0$.</p>\n\n<p>As a result we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\begin{cases}\n\\lambda^2e^{-\\lambda y}&amp;\\text{ if } 0\\leq x\\leq y\\\\\n0&amp; \\text{ otherwise}.\n\\end{cases}$$</p>\n" }, { "answer_id": 2116764, "score": 5, "text": "<p>Directly, by the Jacobian change of variables, we have:</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\\lVert\\tfrac{\\partial(x,z-x)}{\\partial (x,z)}\\rVert\\\\[1ex]&amp; =~ \\lambda^2\\mathsf e^{-\\lambda(x+z-x)}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge0\\leqslant x\\leqslant z}\\\\[1ex] &amp;=~ \\lambda^2\\mathsf e^{-\\lambda z}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge 0\\leqslant x\\leqslant z}\\end{align}$$</p>\n\n<p>(The Jacobian determinant is, conveniently, $1$ in this case).</p>\n\n<hr>\n\n<p>By your method. 
(Using the Fundamental Theorem of Calculus:)</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z}\\mathsf P(X\\leq x,X+Y\\leq z) \\\\[1ex] &amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z} \\int_0^x f_X(s)\\mathsf P(Y\\leq z-X\\mid X=s)\\operatorname d s \\\\[1ex] &amp;=~ f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\mathsf P(Y\\leq z-x) \\\\[1ex] &amp;= f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\int_0^{z-x}f_Y(t)\\operatorname d t \\\\[1ex] &amp;=~ f_X(x)\\,f_Y(z-x) \\\\[1ex] &amp;=~ \\text{(see above)}\\end{align} $$</p>\n" }, { "answer_id": 2116729, "score": 1, "text": "<p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p>\n\n<p>Let's concentrate on the common distribution function:</p>\n\n<p>$$F_{X,X+Y}(x,y)=P(X&lt;x\\cap X+Y&lt;y)=$$\n$$=\\int_0^{\\infty}P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)\\ \\lambda e^{-\\lambda z} \\ dz.\\tag1$$</p>\n\n<p>After substituting $z$ for $X$ we get\n$$P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)=P(z&lt;x\\cap Y&lt;y-z)=\\mathbb I_{z&lt;x}P(Y&lt;y-z)$$\nbecause $X$ and $Y$ are independent.</p>\n\n<p>Since $\\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\\min(x,y)\\leq z$, $(1)$ will be</p>\n\n<p>$$\\lambda\\int_0^{\\min(x,y)}P(Y&lt;y-z) e^{-\\lambda z}\\ dz=$$</p>\n\n<p>$$=\\lambda\\int_0^{\\min(x,y)}\\left(1-e^{-\\lambda(y-z)}\\right)e^{-\\lambda z}\\ dz=$$\n$$=\\lambda\\int_0^{\\min(x,y)}e^{-\\lambda z}\\ dz-\\lambda e^{-\\lambda y}\\int_0^{\\min(x,y)}\\ dz=$$\n$$=1-e^{-\\lambda \\min(x,y)}-\\lambda e^{-\\lambda y}\\min(x,y).$$</p>\n\n<p>Now, let's compute the common density. For $x&lt;y$ we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\frac{\\partial^2 F_{X,X+Y}}{\\partial y \\partial x}=\\frac{\\partial}{\\partial y}\\frac{\\partial}{\\partial x}\\left[1-e^{-\\lambda x}-\\lambda e^{-\\lambda y}x\\right]=$$\n$$=\\lambda\\frac{\\partial}{\\partial y}\\left(e^{-\\lambda x}-e^{-\\lambda y}\\right)=\\lambda^2e^{-\\lambda y}.$$</p>\n\n<p>But if $y\\leq x$ then the operation above results in $0$.</p>\n\n<p>As a result we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\begin{cases}\n\\lambda^2e^{-\\lambda y}&amp;\\text{ if } 0\\leq x\\leq y\\\\\n0&amp; \\text{ otherwise}.\n\\end{cases}$$</p>\n" }, { "answer_id": 2116764, "score": 5, "text": "<p>Directly, by the Jacobian change of variables, we have:</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\\lVert\\tfrac{\\partial(x,z-x)}{\\partial (x,z)}\\rVert\\\\[1ex]&amp; =~ \\lambda^2\\mathsf e^{-\\lambda(x+z-x)}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge0\\leqslant x\\leqslant z}\\\\[1ex] &amp;=~ \\lambda^2\\mathsf e^{-\\lambda z}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge 0\\leqslant x\\leqslant z}\\end{align}$$</p>\n\n<p>(The Jacobian determinant is, conveniently, $1$ in this case).</p>\n\n<hr>\n\n<p>By your method. 
(Using the Fundamental Theorem of Calculus:)</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z}\\mathsf P(X\\leq x,X+Y\\leq z) \\\\[1ex] &amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z} \\int_0^x f_X(s)\\mathsf P(Y\\leq z-X\\mid X=s)\\operatorname d s \\\\[1ex] &amp;=~ f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\mathsf P(Y\\leq z-x) \\\\[1ex] &amp;= f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\int_0^{z-x}f_Y(t)\\operatorname d t \\\\[1ex] &amp;=~ f_X(x)\\,f_Y(z-x) \\\\[1ex] &amp;=~ \\text{(see above)}\\end{align} $$</p>\n" }, { "answer_id": 2116729, "score": 1, "text": "<p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p>\n\n<p>Let's concentrate on the common distribution function:</p>\n\n<p>$$F_{X,X+Y}(x,y)=P(X&lt;x\\cap X+Y&lt;y)=$$\n$$=\\int_0^{\\infty}P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)\\ \\lambda e^{-\\lambda z} \\ dz.\\tag1$$</p>\n\n<p>After substituting $z$ for $X$ we get\n$$P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)=P(z&lt;x\\cap Y&lt;y-z)=\\mathbb I_{z&lt;x}P(Y&lt;y-z)$$\nbecause $X$ and $Y$ are independent.</p>\n\n<p>Since $\\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\\min(x,y)\\leq z$, $(1)$ will be</p>\n\n<p>$$\\lambda\\int_0^{\\min(x,y)}P(Y&lt;y-z) e^{-\\lambda z}\\ dz=$$</p>\n\n<p>$$=\\lambda\\int_0^{\\min(x,y)}\\left(1-e^{-\\lambda(y-z)}\\right)e^{-\\lambda z}\\ dz=$$\n$$=\\lambda\\int_0^{\\min(x,y)}e^{-\\lambda z}\\ dz-\\lambda e^{-\\lambda y}\\int_0^{\\min(x,y)}\\ dz=$$\n$$=1-e^{-\\lambda \\min(x,y)}-\\lambda e^{-\\lambda y}\\min(x,y).$$</p>\n\n<p>Now, let's compute the common density. For $x&lt;y$ we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\frac{\\partial^2 F_{X,X+Y}}{\\partial y \\partial x}=\\frac{\\partial}{\\partial y}\\frac{\\partial}{\\partial x}\\left[1-e^{-\\lambda x}-\\lambda e^{-\\lambda y}x\\right]=$$\n$$=\\lambda\\frac{\\partial}{\\partial y}\\left(e^{-\\lambda x}-e^{-\\lambda y}\\right)=\\lambda^2e^{-\\lambda y}.$$</p>\n\n<p>But if $y\\leq x$ then the operation above results in $0$.</p>\n\n<p>As a result we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\begin{cases}\n\\lambda^2e^{-\\lambda y}&amp;\\text{ if } 0\\leq x\\leq y\\\\\n0&amp; \\text{ otherwise}.\n\\end{cases}$$</p>\n" }, { "answer_id": 2116764, "score": 5, "text": "<p>Directly, by the Jacobian change of variables, we have:</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\\lVert\\tfrac{\\partial(x,z-x)}{\\partial (x,z)}\\rVert\\\\[1ex]&amp; =~ \\lambda^2\\mathsf e^{-\\lambda(x+z-x)}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge0\\leqslant x\\leqslant z}\\\\[1ex] &amp;=~ \\lambda^2\\mathsf e^{-\\lambda z}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge 0\\leqslant x\\leqslant z}\\end{align}$$</p>\n\n<p>(The Jacobian determinant is, conveniently, $1$ in this case).</p>\n\n<hr>\n\n<p>By your method. 
(Using the Fundamental Theorem of Calculus:)</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z}\\mathsf P(X\\leq x,X+Y\\leq z) \\\\[1ex] &amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z} \\int_0^x f_X(s)\\mathsf P(Y\\leq z-X\\mid X=s)\\operatorname d s \\\\[1ex] &amp;=~ f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\mathsf P(Y\\leq z-x) \\\\[1ex] &amp;= f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\int_0^{z-x}f_Y(t)\\operatorname d t \\\\[1ex] &amp;=~ f_X(x)\\,f_Y(z-x) \\\\[1ex] &amp;=~ \\text{(see above)}\\end{align} $$</p>\n" }, { "answer_id": 2116729, "score": 1, "text": "<p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p>\n\n<p>Let's concentrate on the common distribution function:</p>\n\n<p>$$F_{X,X+Y}(x,y)=P(X&lt;x\\cap X+Y&lt;y)=$$\n$$=\\int_0^{\\infty}P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)\\ \\lambda e^{-\\lambda z} \\ dz.\\tag1$$</p>\n\n<p>After substituting $z$ for $X$ we get\n$$P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)=P(z&lt;x\\cap Y&lt;y-z)=\\mathbb I_{z&lt;x}P(Y&lt;y-z)$$\nbecause $X$ and $Y$ are independent.</p>\n\n<p>Since $\\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\\min(x,y)\\leq z$, $(1)$ will be</p>\n\n<p>$$\\lambda\\int_0^{\\min(x,y)}P(Y&lt;y-z) e^{-\\lambda z}\\ dz=$$</p>\n\n<p>$$=\\lambda\\int_0^{\\min(x,y)}\\left(1-e^{-\\lambda(y-z)}\\right)e^{-\\lambda z}\\ dz=$$\n$$=\\lambda\\int_0^{\\min(x,y)}e^{-\\lambda z}\\ dz-\\lambda e^{-\\lambda y}\\int_0^{\\min(x,y)}\\ dz=$$\n$$=1-e^{-\\lambda \\min(x,y)}-\\lambda e^{-\\lambda y}\\min(x,y).$$</p>\n\n<p>Now, let's compute the common density. For $x&lt;y$ we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\frac{\\partial^2 F_{X,X+Y}}{\\partial y \\partial x}=\\frac{\\partial}{\\partial y}\\frac{\\partial}{\\partial x}\\left[1-e^{-\\lambda x}-\\lambda e^{-\\lambda y}x\\right]=$$\n$$=\\lambda\\frac{\\partial}{\\partial y}\\left(e^{-\\lambda x}-e^{-\\lambda y}\\right)=\\lambda^2e^{-\\lambda y}.$$</p>\n\n<p>But if $y\\leq x$ then the operation above results in $0$.</p>\n\n<p>As a result we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\begin{cases}\n\\lambda^2e^{-\\lambda y}&amp;\\text{ if } 0\\leq x\\leq y\\\\\n0&amp; \\text{ otherwise}.\n\\end{cases}$$</p>\n" }, { "answer_id": 2116764, "score": 5, "text": "<p>Directly, by the Jacobian change of variables, we have:</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\\lVert\\tfrac{\\partial(x,z-x)}{\\partial (x,z)}\\rVert\\\\[1ex]&amp; =~ \\lambda^2\\mathsf e^{-\\lambda(x+z-x)}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge0\\leqslant x\\leqslant z}\\\\[1ex] &amp;=~ \\lambda^2\\mathsf e^{-\\lambda z}\\mathbf 1_{(x,z)\\in\\Bbb R^2\\wedge 0\\leqslant x\\leqslant z}\\end{align}$$</p>\n\n<p>(The Jacobian determinant is, conveniently, $1$ in this case).</p>\n\n<hr>\n\n<p>By your method. 
(Using the Fundamental Theorem of Calculus:)</p>\n\n<p>$$\\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z}\\mathsf P(X\\leq x,X+Y\\leq z) \\\\[1ex] &amp;=~ \\dfrac{\\partial^2\\qquad }{\\partial~x~\\partial~z} \\int_0^x f_X(s)\\mathsf P(Y\\leq z-X\\mid X=s)\\operatorname d s \\\\[1ex] &amp;=~ f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\mathsf P(Y\\leq z-x) \\\\[1ex] &amp;= f_X(x)\\,\\dfrac{\\partial\\quad}{\\partial~ z~}\\int_0^{z-x}f_Y(t)\\operatorname d t \\\\[1ex] &amp;=~ f_X(x)\\,f_Y(z-x) \\\\[1ex] &amp;=~ \\text{(see above)}\\end{align} $$</p>\n" }, { "answer_id": 2116729, "score": 1, "text": "<p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p>\n\n<p>Let's concentrate on the common distribution function:</p>\n\n<p>$$F_{X,X+Y}(x,y)=P(X&lt;x\\cap X+Y&lt;y)=$$\n$$=\\int_0^{\\infty}P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)\\ \\lambda e^{-\\lambda z} \\ dz.\\tag1$$</p>\n\n<p>After substituting $z$ for $X$ we get\n$$P(X&lt;x\\ \\cap\\ X+Y&lt;y \\mid X=z)=P(z&lt;x\\cap Y&lt;y-z)=\\mathbb I_{z&lt;x}P(Y&lt;y-z)$$\nbecause $X$ and $Y$ are independent.</p>\n\n<p>Since $\\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\\min(x,y)\\leq z$, $(1)$ will be</p>\n\n<p>$$\\lambda\\int_0^{\\min(x,y)}P(Y&lt;y-z) e^{-\\lambda z}\\ dz=$$</p>\n\n<p>$$=\\lambda\\int_0^{\\min(x,y)}\\left(1-e^{-\\lambda(y-z)}\\right)e^{-\\lambda z}\\ dz=$$\n$$=\\lambda\\int_0^{\\min(x,y)}e^{-\\lambda z}\\ dz-\\lambda e^{-\\lambda y}\\int_0^{\\min(x,y)}\\ dz=$$\n$$=1-e^{-\\lambda \\min(x,y)}-\\lambda e^{-\\lambda y}\\min(x,y).$$</p>\n\n<p>Now, let's compute the common density. For $x&lt;y$ we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\frac{\\partial^2 F_{X,X+Y}}{\\partial y \\partial x}=\\frac{\\partial}{\\partial y}\\frac{\\partial}{\\partial x}\\left[1-e^{-\\lambda x}-\\lambda e^{-\\lambda y}x\\right]=$$\n$$=\\lambda\\frac{\\partial}{\\partial y}\\left(e^{-\\lambda x}-e^{-\\lambda y}\\right)=\\lambda^2e^{-\\lambda y}.$$</p>\n\n<p>But if $y\\leq x$ then the operation above results in $0$.</p>\n\n<p>As a result we have </p>\n\n<p>$$f_{X,X+Y}(x,y)=\\begin{cases}\n\\lambda^2e^{-\\lambda y}&amp;\\text{ if } 0\\leq x\\leq y\\\\\n0&amp; \\text{ otherwise}.\n\\end{cases}$$</p>\n" } ]
Question: Find joint density function of X and X+Y (exponential distribution) Answer 1: <p>Directly, by the Jacobian change of variables, we have:</p> <p>$$\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ f_{X,Y}(x,z-x)~\lVert\tfrac{\partial(x,z-x)}{\partial (x,z)}\rVert\\[1ex]&amp; =~ \lambda^2\mathsf e^{-\lambda(x+z-x)}\mathbf 1_{(x,z)\in\Bbb R^2\wedge0\leqslant x\leqslant z}\\[1ex] &amp;=~ \lambda^2\mathsf e^{-\lambda z}\mathbf 1_{(x,z)\in\Bbb R^2\wedge 0\leqslant x\leqslant z}\end{align}$$</p> <p>(The Jacobian determinant is, conveniently, $1$ in this case).</p> <hr> <p>By your method. (Using the Fundamental Theorem of Calculus:)</p> <p>$$\begin{align}f_{X,X+Y}(x,z) ~&amp;=~ \dfrac{\partial^2\qquad }{\partial~x~\partial~z}\mathsf P(X\leq x,X+Y\leq z) \\[1ex] &amp;=~ \dfrac{\partial^2\qquad }{\partial~x~\partial~z} \int_0^x f_X(s)\mathsf P(Y\leq z-X\mid X=s)\operatorname d s \\[1ex] &amp;=~ f_X(x)\,\dfrac{\partial\quad}{\partial~ z~}\mathsf P(Y\leq z-x) \\[1ex] &amp;= f_X(x)\,\dfrac{\partial\quad}{\partial~ z~}\int_0^{z-x}f_Y(t)\operatorname d t \\[1ex] &amp;=~ f_X(x)\,f_Y(z-x) \\[1ex] &amp;=~ \text{(see above)}\end{align} $$</p> Answer 2: <p>In this solution I try to do everything step by step without referring to more advanced knowledge. The most honorable OP has to accept that this problem is not that simple. (Elementary but not simple; not tricky but needs work.)</p> <p>Let's concentrate on the common distribution function:</p> <p>$$F_{X,X+Y}(x,y)=P(X&lt;x\cap X+Y&lt;y)=$$ $$=\int_0^{\infty}P(X&lt;x\ \cap\ X+Y&lt;y \mid X=z)\ \lambda e^{-\lambda z} \ dz.\tag1$$</p> <p>After substituting $z$ for $X$ we get $$P(X&lt;x\ \cap\ X+Y&lt;y \mid X=z)=P(z&lt;x\cap Y&lt;y-z)=\mathbb I_{z&lt;x}P(Y&lt;y-z)$$ because $X$ and $Y$ are independent.</p> <p>Since $\mathbb I_{z&lt;x}P(Y&lt;y-z)=0$ if $\min(x,y)\leq z$, $(1)$ will be</p> <p>$$\lambda\int_0^{\min(x,y)}P(Y&lt;y-z) e^{-\lambda z}\ dz=$$</p> <p>$$=\lambda\int_0^{\min(x,y)}\left(1-e^{-\lambda(y-z)}\right)e^{-\lambda z}\ dz=$$ $$=\lambda\int_0^{\min(x,y)}e^{-\lambda z}\ dz-\lambda e^{-\lambda y}\int_0^{\min(x,y)}\ dz=$$ $$=1-e^{-\lambda \min(x,y)}-\lambda e^{-\lambda y}\min(x,y).$$</p> <p>Now, let's compute the common density. For $x&lt;y$ we have </p> <p>$$f_{X,X+Y}(x,y)=\frac{\partial^2 F_{X,X+Y}}{\partial y \partial x}=\frac{\partial}{\partial y}\frac{\partial}{\partial x}\left[1-e^{-\lambda x}-\lambda e^{-\lambda y}x\right]=$$ $$=\lambda\frac{\partial}{\partial y}\left(e^{-\lambda x}-e^{-\lambda y}\right)=\lambda^2e^{-\lambda y}.$$</p> <p>But if $y\leq x$ then the operation above results in $0$.</p> <p>As a result we have </p> <p>$$f_{X,X+Y}(x,y)=\begin{cases} \lambda^2e^{-\lambda y}&amp;\text{ if } 0\leq x\leq y\\ 0&amp; \text{ otherwise}. \end{cases}$$</p>
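The closed-form CDF and density derived in the answers are easy to spot-check numerically. Below is a minimal Monte Carlo sketch (not from the original answers; `lam`, `x0`, `z0`, and the trial count are made-up parameters): it estimates $P(X\le x_0,\,X+Y\le z_0)$ by simulation and compares it with $F(x,z)=1-e^{-\lambda\min(x,z)}-\lambda e^{-\lambda z}\min(x,z)$ from the step-by-step answer.

```python
import math
import random

# Hypothetical Monte Carlo check, not part of the original answers:
# compare an empirical estimate of P(X <= x0, X+Y <= z0) against the
# closed-form joint CDF derived above.
lam, x0, z0, trials = 1.5, 0.7, 1.2, 200_000
rng = random.Random(0)
hits = 0
for _ in range(trials):
    x = rng.expovariate(lam)          # X ~ Exp(lam)
    y = rng.expovariate(lam)          # Y ~ Exp(lam), independent of X
    if x <= x0 and x + y <= z0:
        hits += 1
m = min(x0, z0)
exact = 1.0 - math.exp(-lam * m) - lam * math.exp(-lam * z0) * m
print(hits / trials, exact)           # should agree to ~3 decimal places
```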
Does a single-line result in shorter average queue time?
2,117,151
5
https://math.stackexchange.com/questions/2117151/does-a-single-line-result-in-shorter-average-queue-time
<p>Set-up: Travelers are waiting in line to be processed by immigration officers. There are $N$ officers (here $N=3$), each with their own counter. No assumptions about line-switching, constant processing speed, homogeneous processing speed, or homogeneous entrance rate of travelers.</p> <p>Scenario A: Travelers line up in one of 3 lines, each line leading to one of the 3 counters.</p> <p>Scenario B: Travelers line up in a single line. The person at the head of the line will be directed to the next available officer.</p> <p>Additional assumption: All officers will be at 100% utilization rate. </p> <p>I argue that average queue time will be the same in both scenarios. Many co-workers, friends, and the following quotes disagree and believe that Scenario B (single-line) will result in a shorter average queue time. </p> <p>Who's right?</p> <hr> <p>"Finally, a single-line, multiple-server system has better performance in terms of waiting times than the same system with a line for each server." - Reid, Sanders; Operations Management</p> <p>"Research has proven that a single line, multi-server waiting system is faster than the multiple line approach." - <a href="http://blog.lavi.com/2014/08/07/single-line-queue/" rel="noreferrer">http://blog.lavi.com/2014/08/07/single-line-queue/</a></p> <p>"A Long Line for a Shorter Wait at the Supermarket" - <a href="http://www.nytimes.com/2007/06/23/business/23checkout.html" rel="noreferrer">http://www.nytimes.com/2007/06/23/business/23checkout.html</a></p>
[ { "answer_id": 2982104, "score": 1, "text": "<p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p>\n" }, { "answer_id": 3475637, "score": 0, "text": "<p>Note that in practice, using a single line causes extra time to get to the open counter. This decreases the effective utilization rate and, therefore, increases the queuing time.</p>\n" }, { "answer_id": 2982104, "score": 1, "text": "<p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p>\n" }, { "answer_id": 3475637, "score": 0, "text": "<p>Note that in practice, using a single line causes extra time to get to the open counter. This decreases the effective utilization rate and, therefore, increases the queuing time.</p>\n" }, { "answer_id": 2982104, "score": 1, "text": "<p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p>\n" }, { "answer_id": 3475637, "score": 0, "text": "<p>Note that in practice, using a single line causes extra time to get to the open counter. This decreases the effective utilization rate and, therefore, increases the queuing time.</p>\n" }, { "answer_id": 2982104, "score": 1, "text": "<p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p>\n" }, { "answer_id": 3475637, "score": 0, "text": "<p>Note that in practice, using a single line causes extra time to get to the open counter. 
This decreases the effective utilization rate and, therefore, increases the queuing time.</p>\n" }, { "answer_id": 2982104, "score": 1, "text": "<p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p>\n" }, { "answer_id": 3475637, "score": 0, "text": "<p>Note that in practice, using a single line causes extra time to get to the open counter. This decreases the effective utilization rate and, therefore, increases the queuing time.</p>\n" } ]
Question: Does a single-line result in shorter average queue time? Answer 1: <p>The average waiting time (mathematical expectation) is the same but the variation of waiting time for a single line is smaller than that of multiple lines (easy to demonstrate mathematically). Thus a risk-averse person would prefer a single line. The intuition is: if you end up in a faster server your time of waiting is much shortened; if you end up in a slow server then your time of waiting is much prolonged (multiplied by the number of people ahead of you plus you). For a single line, your waiting time is simply the server's speed, it is not exacerbated.</p> Answer 2: <p>Note that in practice, using a single line causes extra time to get to the open counter. This decreases the effective utilization rate and, therefore, increases the queuing time.</p>
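The disagreement is easy to probe with a small discrete-event simulation. The sketch below is not from the original thread, and the arrival rate, service rate, and customer count are made-up parameters. It compares a single pooled line feeding three servers against three separate lines chosen at random with no switching; under the question's idealization that officers are never idle the total work and hence the averages coincide, while with random line choice the pooled line typically shows a shorter and less variable wait.

```python
import random

# Hypothetical simulation sketch, not part of the original answers.
def mean_wait(pooled, n_srv=3, lam=2.5, mu=1.0, n_cust=100_000, seed=1):
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_srv               # when each server next becomes free
    total = 0.0
    for _ in range(n_cust):
        t += rng.expovariate(lam)         # Poisson arrivals
        if pooled:
            k = min(range(n_srv), key=lambda i: free_at[i])  # next free server
        else:
            k = rng.randrange(n_srv)      # pick one of the 3 lines at random
        start = max(t, free_at[k])        # FIFO within the chosen line
        total += start - t
        free_at[k] = start + rng.expovariate(mu)  # exponential service
    return total / n_cust

print("single line:", mean_wait(True))
print("three lines:", mean_wait(False))
```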
Find when three numbers have the same remainder when divided by the same number
2,117,166
5
https://math.stackexchange.com/questions/2117166/find-when-three-numbers-have-the-same-remainder-when-divided-by-the-same-number
<p>If the three numbers 112, 232, and 400 are each divided by the same number D, each division leaves the same remainder R. Find R, where R&gt;1.</p> <p>How should I approach this?</p>
[ { "answer_id": 2117185, "score": 2, "text": "<p>Hint:\n$$112 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$232 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$400 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$120 \\equiv 0 \\mod D$$\n$$168 \\equiv 0 \\mod D$$</p>\n\n<p>Hence $D$ is a common factor of both $120$ and $168$.</p>\n\n<p>$$24 \\equiv 0 \\mod D$$</p>\n\n<p>$D \\in \\{1,2,3,4,6,8,12,24 \\}$</p>\n\n<p>Use $R&gt;1$ to identify which $D$ is possible.</p>\n" }, { "answer_id": 2117186, "score": 2, "text": "<p>We have $112 \\equiv 232 \\equiv 400 \\pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \\mid 120,168,288$ and so $D \\mid 24$. Therefore, $1,2,3,4,6,8,12,24$ are possibilities for $D$ but since $R &gt; 1$ we have only $D = 6,12,24$.</p>\n" }, { "answer_id": 2117187, "score": 2, "text": "<p>This is the question that when we subtract $x$ from $112, 232,400$, then we should get three numbers commonly divisible by some $D &gt; x$. That is, $D | 112 - x, 232 - x, 400-x$ for $x &lt; D$.</p>\n\n<p>Before we think twice, I've got the answer: $x=16$ , and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p>\n\n<p>Note that the greatest common divisor of the numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$ . Hence, if $x$ is a multiple of $8$, then it is seen that the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. However, if these all are divisible by $8$, so are their differences: $232-112 = 120, 400-232=168, 400-112 = 288$. These numbers, however, are also divisible by $24$. So we take the guess $24$ and it works out. So $R=16, D=24$ works.</p>\n\n<p>Anything else that must work must be a divisor of $24$, as we have seen. We can rule out some of the possibilities:</p>\n\n<p>1) $D=1,2,4,8$ can be ruled out, because on division by these, the remainder zero is left all the time.</p>\n\n<p>2) On division by $3$, they all leave remainder $1$. So here is one answer, but then $R&gt;1$ is given in the question, so this is not considered.</p>\n\n<p>3) On division by $6$ and $12$, they leave the remainder $4$.</p>\n\n<p>4) On division by $24$, they leave the remainder $16$.</p>\n\n<p>The following is a complete description of the problem's answer.</p>\n" }, { "answer_id": 2117247, "score": 1, "text": "<p>It's solvable by a <em>general</em> method, viz. <a href=\"https://math.stackexchange.com/a/1857626/242\">CRT = Chinese Remainder Theorem. 
Recall</a></p>\n<p><span class=\"math-container\">$$\\begin{align} \nr \\equiv a_1\\!\\!\\!\\pmod{d_1}\\\\\nr \\equiv a_2\\!\\!\\!\\pmod{d_2}\\\\\nr \\equiv a_3\\!\\!\\!\\pmod{d_3}\n\\end{align}$$</span></p>\n<p>is solvable <span class=\"math-container\">$\\!\\iff\\!$</span> the congruences are pairwise solvable <span class=\"math-container\">$\\!\\iff\\!$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)\\mid a_i-a_j\\,$</span> for all <span class=\"math-container\">$\\,i\\neq j$</span></p>\n<p>We have <span class=\"math-container\">$\\,a_i = 112,232,400\\,$</span> and <span class=\"math-container\">$\\,d_i = d\\,$</span> <span class=\"math-container\">$\\Rightarrow$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)=d,\\,$</span> so the solvability condition is</p>\n<p><span class=\"math-container\">$$ \\begin{align}d\\mid 232\\!-\\!112=120\\\\ d\\mid 400\\!-\\!232=168\\\\ d\\mid 400\\!-\\!112=288\\end{align}\\iff d\\mid\\gcd(120,168,288)=24\\qquad$$</span></p>\n<p>Now we need only check which divisors of <span class=\"math-container\">$24$</span> yield remainder <span class=\"math-container\">$\\,r&gt;1,\\,$</span> which is easy.</p>\n" }, { "answer_id": 2117185, "score": 2, "text": "<p>Hint:\n$$112 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$232 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$400 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$120 \\equiv 0 \\mod D$$\n$$168 \\equiv 0 \\mod D$$</p>\n\n<p>Hence $D$ is a common factor of both $120$ and $168$.</p>\n\n<p>$$24 \\equiv 0 \\mod D$$</p>\n\n<p>$D \\in \\{1,2,3,4,6,8,12,24 \\}$</p>\n\n<p>Use $R&gt;1$ to identify which $D$ is possible.</p>\n" }, { "answer_id": 2117186, "score": 2, "text": "<p>We have $112 \\equiv 232 \\equiv 400 \\pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \\mid 120,168,288$ and so $D \\mid 24$. Therefore, $1,2,3,4,6,8,12,24$ are possibilities for $D$ but since $R &gt; 1$ we have only $D = 6,12,24$.</p>\n" }, { "answer_id": 2117187, "score": 2, "text": "<p>This is the question that when we subtract $x$ from $112, 232,400$, then we should get three numbers commonly divisible by some $D &gt; x$. That is, $D | 112 - x, 232 - x, 400-x$ for $x &lt; D$.</p>\n\n<p>Before we think twice, I've got the answer: $x=16$ , and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p>\n\n<p>Note that the greatest common divisor of the numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$ . Hence, if $x$ is a multiple of $8$, then it is seen that the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. However, if these all are divisible by $8$, so are their differences: $232-112 = 120, 400-232=168, 400-112 = 288$. These numbers, however, are also divisible by $24$. So we take the guess $24$ and it works out. So $R=16, D=24$ works.</p>\n\n<p>Anything else that must work must be a divisor of $24$, as we have seen. We can rule out some of the possibilities:</p>\n\n<p>1) $D=1,2,4,8$ can be ruled out, because on division by these, the remainder zero is left all the time.</p>\n\n<p>2) On division by $3$, they all leave remainder $1$. So here is one answer, but then $R&gt;1$ is given in the question, so this is not considered.</p>\n\n<p>3) On division by $6$ and $12$, they leave the remainder $4$.</p>\n\n<p>4) On division by $24$, they leave the remainder $16$.</p>\n\n<p>The following is a complete description of the problem's answer.</p>\n" }, { "answer_id": 2117247, "score": 1, "text": "<p>It's solvable by a <em>general</em> method, viz. 
<a href=\"https://math.stackexchange.com/a/1857626/242\">CRT = Chinese Remainder Theorem. Recall</a></p>\n<p><span class=\"math-container\">$$\\begin{align} \nr \\equiv a_1\\!\\!\\!\\pmod{d_1}\\\\\nr \\equiv a_2\\!\\!\\!\\pmod{d_2}\\\\\nr \\equiv a_3\\!\\!\\!\\pmod{d_3}\n\\end{align}$$</span></p>\n<p>is solvable <span class=\"math-container\">$\\!\\iff\\!$</span> the congruences are pairwise solvable <span class=\"math-container\">$\\!\\iff\\!$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)\\mid a_i-a_j\\,$</span> for all <span class=\"math-container\">$\\,i\\neq j$</span></p>\n<p>We have <span class=\"math-container\">$\\,a_i = 112,232,400\\,$</span> and <span class=\"math-container\">$\\,d_i = d\\,$</span> <span class=\"math-container\">$\\Rightarrow$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)=d,\\,$</span> so the solvability condition is</p>\n<p><span class=\"math-container\">$$ \\begin{align}d\\mid 232\\!-\\!112=120\\\\ d\\mid 400\\!-\\!232=168\\\\ d\\mid 400\\!-\\!112=288\\end{align}\\iff d\\mid\\gcd(120,168,288)=24\\qquad$$</span></p>\n<p>Now we need only check which divisors of <span class=\"math-container\">$24$</span> yield remainder <span class=\"math-container\">$\\,r&gt;1,\\,$</span> which is easy.</p>\n" }, { "answer_id": 2117185, "score": 2, "text": "<p>Hint:\n$$112 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$232 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$400 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$120 \\equiv 0 \\mod D$$\n$$168 \\equiv 0 \\mod D$$</p>\n\n<p>Hence $D$ is a common factor of both $120$ and $168$.</p>\n\n<p>$$24 \\equiv 0 \\mod D$$</p>\n\n<p>$D \\in \\{1,2,3,4,6,8,12,24 \\}$</p>\n\n<p>Use $R&gt;1$ to identify which $D$ is possible.</p>\n" }, { "answer_id": 2117186, "score": 2, "text": "<p>We have $112 \\equiv 232 \\equiv 400 \\pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \\mid 120,168,288$ and so $D \\mid 24$. Therefore, $1,2,3,4,6,8,12,24$ are possibilities for $D$ but since $R &gt; 1$ we have only $D = 6,12,24$.</p>\n" }, { "answer_id": 2117187, "score": 2, "text": "<p>This is the question that when we subtract $x$ from $112, 232,400$, then we should get three numbers commonly divisible by some $D &gt; x$. That is, $D | 112 - x, 232 - x, 400-x$ for $x &lt; D$.</p>\n\n<p>Before we think twice, I've got the answer: $x=16$ , and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p>\n\n<p>Note that the greatest common divisor of the numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$ . Hence, if $x$ is a multiple of $8$, then it is seen that the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. However, if these all are divisible by $8$, so are their differences: $232-112 = 120, 400-232=168, 400-112 = 288$. These numbers, however, are also divisible by $24$. So we take the guess $24$ and it works out. So $R=16, D=24$ works.</p>\n\n<p>Anything else that must work must be a divisor of $24$, as we have seen. We can rule out some of the possibilities:</p>\n\n<p>1) $D=1,2,4,8$ can be ruled out, because on division by these, the remainder zero is left all the time.</p>\n\n<p>2) On division by $3$, they all leave remainder $1$. 
So here is one answer, but then $R&gt;1$ is given in the question, so this is not considered.</p>\n\n<p>3) On division by $6$ and $12$, they leave the remainder $4$.</p>\n\n<p>4) On division by $24$, they leave the remainder $16$.</p>\n\n<p>The following is a complete description of the problem's answer.</p>\n" }, { "answer_id": 2117247, "score": 1, "text": "<p>It's solvable by a <em>general</em> method, viz. <a href=\"https://math.stackexchange.com/a/1857626/242\">CRT = Chinese Remainder Theorem. Recall</a></p>\n<p><span class=\"math-container\">$$\\begin{align} \nr \\equiv a_1\\!\\!\\!\\pmod{d_1}\\\\\nr \\equiv a_2\\!\\!\\!\\pmod{d_2}\\\\\nr \\equiv a_3\\!\\!\\!\\pmod{d_3}\n\\end{align}$$</span></p>\n<p>is solvable <span class=\"math-container\">$\\!\\iff\\!$</span> the congruences are pairwise solvable <span class=\"math-container\">$\\!\\iff\\!$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)\\mid a_i-a_j\\,$</span> for all <span class=\"math-container\">$\\,i\\neq j$</span></p>\n<p>We have <span class=\"math-container\">$\\,a_i = 112,232,400\\,$</span> and <span class=\"math-container\">$\\,d_i = d\\,$</span> <span class=\"math-container\">$\\Rightarrow$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)=d,\\,$</span> so the solvability condition is</p>\n<p><span class=\"math-container\">$$ \\begin{align}d\\mid 232\\!-\\!112=120\\\\ d\\mid 400\\!-\\!232=168\\\\ d\\mid 400\\!-\\!112=288\\end{align}\\iff d\\mid\\gcd(120,168,288)=24\\qquad$$</span></p>\n<p>Now we need only check which divisors of <span class=\"math-container\">$24$</span> yield remainder <span class=\"math-container\">$\\,r&gt;1,\\,$</span> which is easy.</p>\n" }, { "answer_id": 2117185, "score": 2, "text": "<p>Hint:\n$$112 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$232 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$400 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$120 \\equiv 0 \\mod D$$\n$$168 \\equiv 0 \\mod D$$</p>\n\n<p>Hence $D$ is a common factor of both $120$ and $168$.</p>\n\n<p>$$24 \\equiv 0 \\mod D$$</p>\n\n<p>$D \\in \\{1,2,3,4,6,8,12,24 \\}$</p>\n\n<p>Use $R&gt;1$ to identify which $D$ is possible.</p>\n" }, { "answer_id": 2117186, "score": 2, "text": "<p>We have $112 \\equiv 232 \\equiv 400 \\pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \\mid 120,168,288$ and so $D \\mid 24$. Therefore, $1,2,3,4,6,8,12,24$ are possibilities for $D$ but since $R &gt; 1$ we have only $D = 6,12,24$.</p>\n" }, { "answer_id": 2117187, "score": 2, "text": "<p>This is the question that when we subtract $x$ from $112, 232,400$, then we should get three numbers commonly divisible by some $D &gt; x$. That is, $D | 112 - x, 232 - x, 400-x$ for $x &lt; D$.</p>\n\n<p>Before we think twice, I've got the answer: $x=16$ , and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p>\n\n<p>Note that the greatest common divisor of the numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$ . Hence, if $x$ is a multiple of $8$, then it is seen that the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. However, if these all are divisible by $8$, so are their differences: $232-112 = 120, 400-232=168, 400-112 = 288$. These numbers, however, are also divisible by $24$. So we take the guess $24$ and it works out. So $R=16, D=24$ works.</p>\n\n<p>Anything else that must work must be a divisor of $24$, as we have seen. 
We can rule out some of the possibilities:</p>\n\n<p>1) $D=1,2,4,8$ can be ruled out, because on division by these, the remainder zero is left all the time.</p>\n\n<p>2) On division by $3$, they all leave remainder $1$. So here is one answer, but then $R&gt;1$ is given in the question, so this is not considered.</p>\n\n<p>3) On division by $6$ and $12$, they leave the remainder $4$.</p>\n\n<p>4) On division by $24$, they leave the remainder $16$.</p>\n\n<p>The following is a complete description of the problem's answer.</p>\n" }, { "answer_id": 2117247, "score": 1, "text": "<p>It's solvable by a <em>general</em> method, viz. <a href=\"https://math.stackexchange.com/a/1857626/242\">CRT = Chinese Remainder Theorem. Recall</a></p>\n<p><span class=\"math-container\">$$\\begin{align} \nr \\equiv a_1\\!\\!\\!\\pmod{d_1}\\\\\nr \\equiv a_2\\!\\!\\!\\pmod{d_2}\\\\\nr \\equiv a_3\\!\\!\\!\\pmod{d_3}\n\\end{align}$$</span></p>\n<p>is solvable <span class=\"math-container\">$\\!\\iff\\!$</span> the congruences are pairwise solvable <span class=\"math-container\">$\\!\\iff\\!$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)\\mid a_i-a_j\\,$</span> for all <span class=\"math-container\">$\\,i\\neq j$</span></p>\n<p>We have <span class=\"math-container\">$\\,a_i = 112,232,400\\,$</span> and <span class=\"math-container\">$\\,d_i = d\\,$</span> <span class=\"math-container\">$\\Rightarrow$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)=d,\\,$</span> so the solvability condition is</p>\n<p><span class=\"math-container\">$$ \\begin{align}d\\mid 232\\!-\\!112=120\\\\ d\\mid 400\\!-\\!232=168\\\\ d\\mid 400\\!-\\!112=288\\end{align}\\iff d\\mid\\gcd(120,168,288)=24\\qquad$$</span></p>\n<p>Now we need only check which divisors of <span class=\"math-container\">$24$</span> yield remainder <span class=\"math-container\">$\\,r&gt;1,\\,$</span> which is easy.</p>\n" }, { "answer_id": 2117185, "score": 2, "text": "<p>Hint:\n$$112 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$232 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$400 - R \\equiv 0 \\mod D$$</p>\n\n<p>$$120 \\equiv 0 \\mod D$$\n$$168 \\equiv 0 \\mod D$$</p>\n\n<p>Hence $D$ is a common factor of both $120$ and $168$.</p>\n\n<p>$$24 \\equiv 0 \\mod D$$</p>\n\n<p>$D \\in \\{1,2,3,4,6,8,12,24 \\}$</p>\n\n<p>Use $R&gt;1$ to identify which $D$ is possible.</p>\n" }, { "answer_id": 2117186, "score": 2, "text": "<p>We have $112 \\equiv 232 \\equiv 400 \\pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \\mid 120,168,288$ and so $D \\mid 24$. Therefore, $1,2,3,4,6,8,12,24$ are possibilities for $D$ but since $R &gt; 1$ we have only $D = 6,12,24$.</p>\n" }, { "answer_id": 2117187, "score": 2, "text": "<p>This is the question that when we subtract $x$ from $112, 232,400$, then we should get three numbers commonly divisible by some $D &gt; x$. That is, $D | 112 - x, 232 - x, 400-x$ for $x &lt; D$.</p>\n\n<p>Before we think twice, I've got the answer: $x=16$ , and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p>\n\n<p>Note that the greatest common divisor of the numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$ . Hence, if $x$ is a multiple of $8$, then it is seen that the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. However, if these all are divisible by $8$, so are their differences: $232-112 = 120, 400-232=168, 400-112 = 288$. These numbers, however, are also divisible by $24$. So we take the guess $24$ and it works out. 
So $R=16, D=24$ works.</p>\n\n<p>Anything else that must work must be a divisor of $24$, as we have seen. We can rule out some of the possibilities:</p>\n\n<p>1) $D=1,2,4,8$ can be ruled out, because on division by these, the remainder zero is left all the time.</p>\n\n<p>2) On division by $3$, they all leave remainder $1$. So here is one answer, but then $R&gt;1$ is given in the question, so this is not considered.</p>\n\n<p>3) On division by $6$ and $12$, they leave the remainder $4$.</p>\n\n<p>4) On division by $24$, they leave the remainder $16$.</p>\n\n<p>The following is a complete description of the problem's answer.</p>\n" }, { "answer_id": 2117247, "score": 1, "text": "<p>It's solvable by a <em>general</em> method, viz. <a href=\"https://math.stackexchange.com/a/1857626/242\">CRT = Chinese Remainder Theorem. Recall</a></p>\n<p><span class=\"math-container\">$$\\begin{align} \nr \\equiv a_1\\!\\!\\!\\pmod{d_1}\\\\\nr \\equiv a_2\\!\\!\\!\\pmod{d_2}\\\\\nr \\equiv a_3\\!\\!\\!\\pmod{d_3}\n\\end{align}$$</span></p>\n<p>is solvable <span class=\"math-container\">$\\!\\iff\\!$</span> the congruences are pairwise solvable <span class=\"math-container\">$\\!\\iff\\!$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)\\mid a_i-a_j\\,$</span> for all <span class=\"math-container\">$\\,i\\neq j$</span></p>\n<p>We have <span class=\"math-container\">$\\,a_i = 112,232,400\\,$</span> and <span class=\"math-container\">$\\,d_i = d\\,$</span> <span class=\"math-container\">$\\Rightarrow$</span> <span class=\"math-container\">$\\,\\gcd(d_i,d_j)=d,\\,$</span> so the solvability condition is</p>\n<p><span class=\"math-container\">$$ \\begin{align}d\\mid 232\\!-\\!112=120\\\\ d\\mid 400\\!-\\!232=168\\\\ d\\mid 400\\!-\\!112=288\\end{align}\\iff d\\mid\\gcd(120,168,288)=24\\qquad$$</span></p>\n<p>Now we need only check which divisors of <span class=\"math-container\">$24$</span> yield remainder <span class=\"math-container\">$\\,r&gt;1,\\,$</span> which is easy.</p>\n" } ]
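A three-line brute force confirms the answers above. This is a hypothetical check, not part of the original answers; it simply scans every candidate modulus directly.

```python
# Hypothetical brute-force check, not from the original answers: list every
# modulus d under which 112, 232 and 400 leave one common remainder r.
for d in range(2, 401):
    r = 112 % d
    if 232 % d == r == 400 % d:
        print(d, r)
# Only the divisors of gcd(120, 168, 288) = 24 appear, and the requirement
# R > 1 leaves d = 6 or 12 (R = 4) and d = 24 (R = 16).
```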
Question: Find when three numbers have the same remainder when divided by the same number Answer 1: <p>Hint: $$112 - R \equiv 0 \mod D$$</p> <p>$$232 - R \equiv 0 \mod D$$</p> <p>$$400 - R \equiv 0 \mod D$$</p> <p>Subtracting these in pairs: $$120 \equiv 0 \mod D$$ $$168 \equiv 0 \mod D$$</p> <p>Hence $D$ is a common factor of both $120$ and $168$, and $\gcd(120,168)=24$, so $$24 \equiv 0 \mod D$$</p> <p>$D \in \{1,2,3,4,6,8,12,24 \}$</p> <p>Use $R&gt;1$ to identify which $D$ is possible.</p> Answer 2: <p>We have $112 \equiv 232 \equiv 400 \pmod{D}$. This is equivalent to $D$ dividing their pairwise differences: $D \mid 120,168,288$, and so $D \mid \gcd(120,168,288)=24$. Therefore, $1,2,3,4,6,8,12,24$ are the possibilities for $D$, but since $R &gt; 1$ we have only $D = 6,12,24$.</p> Answer 3: <p>This is the question of finding an $x$ such that when we subtract $x$ from $112, 232, 400$, we get three numbers commonly divisible by some $D &gt; x$. That is, $D \mid 112 - x, 232 - x, 400-x$ for some $x &lt; D$.</p> <p>Before we think twice, I've got the answer: $x=16$, and the numbers are $96,216,384$, commonly divisible by $24 = D$.</p> <p>Note that the greatest common divisor of the three numbers is $8$: $112 = 8*14, 232=8*29, 400=8*50$. Hence, if $x$ is a multiple of $8$, then the numbers $112-x, 232-x, 400-x$ are divisible by $8$ as well. Moreover, any common divisor of these numbers also divides their differences: $232-112 = 120, 400-232=168, 400-112 = 288$, and these are all divisible by $24$. So we try the guess $24$, and it works out: $R=16, D=24$.</p> <p>Anything else that works must be a divisor of $24$, as we have seen. We can rule out the remaining possibilities one by one:</p> <p>1) $D=1,2,4,8$ can be ruled out, because on division by each of these the remainder is always zero, while we need $R&gt;1$.</p> <p>2) On division by $3$, they all leave remainder $1$. This would be an answer, but the question requires $R&gt;1$, so it is excluded.</p> <p>3) On division by $6$ and $12$, they all leave the remainder $4$.</p> <p>4) On division by $24$, they all leave the remainder $16$.</p> <p>The above is a complete description of the problem's answers: $(D,R)$ can be $(6,4)$, $(12,4)$ or $(24,16)$.</p> Answer 4: <p>It's solvable by a <em>general</em> method, viz. <a href="https://math.stackexchange.com/a/1857626/242">CRT = Chinese Remainder Theorem. 
Recall</a></p> <p><span class="math-container">$$\begin{align} r \equiv a_1\!\!\!\pmod{d_1}\\ r \equiv a_2\!\!\!\pmod{d_2}\\ r \equiv a_3\!\!\!\pmod{d_3} \end{align}$$</span></p> <p>is solvable <span class="math-container">$\!\iff\!$</span> the congruences are pairwise solvable <span class="math-container">$\!\iff\!$</span> <span class="math-container">$\,\gcd(d_i,d_j)\mid a_i-a_j\,$</span> for all <span class="math-container">$\,i\neq j$</span></p> <p>We have <span class="math-container">$\,a_i = 112,232,400\,$</span> and <span class="math-container">$\,d_i = d\,$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\,\gcd(d_i,d_j)=d,\,$</span> so the solvability condition is</p> <p><span class="math-container">$$ \begin{align}d\mid 232\!-\!112=120\\ d\mid 400\!-\!232=168\\ d\mid 400\!-\!112=288\end{align}\iff d\mid\gcd(120,168,288)=24\qquad$$</span></p> <p>Now we need only check which divisors of <span class="math-container">$24$</span> yield remainder <span class="math-container">$\,r&gt;1,\,$</span> which is easy.</p>
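As a sanity check on the gcd computation above, a short Python sketch (illustrative only, not from the original answer):

```python
from math import gcd

nums = (112, 232, 400)

# D must divide every pairwise difference, hence their gcd.
d_bound = gcd(gcd(232 - 112, 400 - 232), 400 - 112)
print(d_bound)  # 24

# Divisors of 24 whose common remainder R is greater than 1.
for d in range(2, d_bound + 1):
    if d_bound % d == 0:
        rems = {n % d for n in nums}
        if len(rems) == 1 and rems.pop() > 1:
            print(d, 112 % d)  # (D, R) pairs: (6, 4), (12, 4), (24, 16)
```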
Expanding powers of $\tan^n \theta$
2,079,082
4
https://math.stackexchange.com/questions/2079082/expanding-powers-of-tann-theta
<p>I know there are standard techniques to expand $\sin^n \theta$ and $\cos^n \theta$ $(n \in \mathbb{N})$ in terms of sines and cosines of multiples of $\theta$. </p> <p>We take a complex number $z = \cos \theta + i \sin \theta$. Then </p> <p>$$ \cos k\theta = {1 \over 2} \left ( z^k + {1 \over z^k} \right ) $$</p> <p>where $k \in \mathbb{N}$ and</p> <p>$$ \cos^n \theta = {1 \over 2^n} \left ( z + {1 \over z} \right )^n . $$</p> <p>You just expand the second equation and repeatedly apply the first, to get your desired result. We do a similar thing for $\sin^n \theta$. </p> <p>My question is: Is there any standard technique to expand $\tan^n \theta$ as a sum of tangents or sines or cosines of multiples of $\theta$?</p> <p><strong>EDIT:</strong> I noticed that an expansion like what I want doesn't exist even for $\tan^2 \theta$. The best we can do is: $\tan^2 \theta = 1 - 2 {\tan \theta \over \tan 2\theta}$. Now that it seems like an expansion might not exist, can anyone please justify why it doesn't exist? But if something does exist, then what is it? I'd prefer the expansion to be something which is easily integrable, just like with the expansions for $\sin^n \theta$ and $\cos^n \theta$.</p>
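Before turning to the answers, the $\tan^2 \theta$ identity quoted in the edit can be checked symbolically. A small SymPy sketch (an illustration assuming SymPy is available; not part of the original question):

```python
import sympy as sp

theta = sp.symbols('theta')
t = sp.tan(theta)

# tan(2*theta) rewritten in terms of tan(theta): 2t/(1 - t**2)
tan2 = sp.expand_trig(sp.tan(2 * theta))

# The identity from the question: tan^2(theta) = 1 - 2*tan(theta)/tan(2*theta)
print(sp.simplify(t**2 - (1 - 2 * t / tan2)))  # -> 0
```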
[ { "answer_id": 2079085, "score": 1, "text": "<p>You can use this way to get $\\sin^n\\theta$ and $\\cos^n\\theta$ in terms of $\\sin\\theta$ and $\\cos\\theta,$ then $\\tan^n\\theta=\\frac{\\sin^n\\theta}{\\cos^n\\theta}$ can be obtained from above, if your goal is just to expand it as a sum of tangents or sines or cosines of multiples of $\\theta.$ (this result involves $\\sin$ and $\\cos$ terms.</p>\n" }, { "answer_id": 4751404, "score": 1, "text": "<p>Really late answer, but I feel this is worth something.</p>\n<p>So, I'm not sure about <span class=\"math-container\">$\\tan^{n} \\theta$</span>, but I think that it is much more common to express <span class=\"math-container\">$\\tan{n \\theta}$</span> in terms of powers of <span class=\"math-container\">$\\tan^{\\theta}$</span>, so here's a derivation for that... it might not be exactly what you want but I'm not even sure there is an analogue for <span class=\"math-container\">$\\tan^{n} \\theta$</span>, so I'm putting this as an answer.</p>\n<p>Suppose we want a polynomial <span class=\"math-container\">$F_n(x)=\\tan(n \\arctan{x})$</span>. From the definition, it is clear that <span class=\"math-container\">$F_n(\\tan{\\theta}) = \\tan{n \\theta}$</span>, so this is what we want. Clearly, <span class=\"math-container\">$F_n$</span> doesn't look like a polynomial right now, so let's rewrite it. I'll let <span class=\"math-container\">$\\theta = \\arctan{x}$</span> for simplicity.</p>\n<p><span class=\"math-container\">$$F_n(x)=\\tan(n \\arctan{x})=$$</span>\n<span class=\"math-container\">$$\\frac{\\sin (n \\theta)}{\\cos (n \\theta)}=$$</span>\n<span class=\"math-container\">$$\\frac{1}{i} \\cdot \\frac{(\\cos{\\theta} + i \\sin{\\theta})^n - (\\cos{\\theta} - i \\sin{\\theta})^n}{(\\cos{\\theta} + i \\sin{\\theta})^n+(\\cos{\\theta} - i \\sin{\\theta})^n}$$</span></p>\n<p>Where the last equality follows from considering De Moivre's formula. Now, this looks much more complicated, but remember that we have <span class=\"math-container\">$\\theta = \\arctan{x}$</span>. Considering our standard formulas, we remember that:</p>\n<p><span class=\"math-container\">$$\\cos(\\arctan{x})=\\frac{1}{\\sqrt{x^2+1}}$$</span>\n<span class=\"math-container\">$$\\sin(\\arctan{x})=\\frac{x}{\\sqrt{x^2+1}}$$</span></p>\n<p>So, we have that:</p>\n<p><span class=\"math-container\">$$F_n(x)=\\frac{1}{i} \\cdot \\frac{\\frac{(1+ix)^n}{\\sqrt{(x^2+1)^n}}-\\frac{(1-ix)^n}{\\sqrt{(x^2+1)^n}}}{\\frac{(1+ix)^n}{\\sqrt{(x^2+1)^n}}+\\frac{(1-ix)^n}{\\sqrt{(x^2+1)^n}}}=$$</span>\n<span class=\"math-container\">$$\\frac{1}{i} \\cdot \\frac{(1+ix)^n-(1-ix)^n}{(1+ix)^n+(1-ix)^n}$$</span></p>\n<p>Now, we can consider the binomial expansions of <span class=\"math-container\">$(1+ix)^n$</span> and <span class=\"math-container\">$(1-ix)^n$</span>. By the binomial theorem, we have that:</p>\n<p><span class=\"math-container\">$$(1+ix)^n=\\sum_{j=0}^{n} \\binom{n}{j} (ix)^j$$</span>\n<span class=\"math-container\">$$(1-ix)^n=\\sum_{j=0}^{n} \\binom{n}{j} (-ix)^j$$</span></p>\n<p>Now, in the numerator, all the terms with even powers of <span class=\"math-container\">$j$</span> will cancel, and in the denominator, all the terms with odd powers of <span class=\"math-container\">$j$</span> will cancel. So, in the numerator, we will be left with the imaginary part of <span class=\"math-container\">$(1+ix)^n$</span> times <span class=\"math-container\">$i$</span>, and in the denominator, we will be left with the real part of <span class=\"math-container\">$(1+ix)^n$</span>. 
So,</p>\n<p><span class=\"math-container\">$$F_n(x)=\\frac{1}{i} \\cdot \\frac{i \\cdot \\Im[(1+ix)^n]}{\\Re[(1+ix)^n]}=$$</span>\n<span class=\"math-container\">$$\\frac{\\Im[(1+ix)^n]}{\\Re[(1+ix)^n]},$$</span></p>\n<p>where <span class=\"math-container\">$\\Im[a+bi]=b$</span> and <span class=\"math-container\">$\\Re[a+bi]=a$</span>.</p>\n<p>Hence, we have the final result that\n<span class=\"math-container\">$$\\boxed{\\tan(n \\theta) = \\frac{\\Im[(1+i \\tan{\\theta})^n]}{\\Re[(1+i \\tan{\\theta})^n]}}$$</span></p>\n<p>You can derive an &quot;explicit&quot; formula by just writing out the binomial expansion terms, but I think it is much cleaner this way.</p>\n<p>As to your question regarding <span class=\"math-container\">$\\tan^{n} \\theta$</span>, I am almost certain that there is no expansion as a sum of tangents, sines, or cosines of multiples of <span class=\"math-container\">$\\theta$</span>. I have absolutely no idea how to prove it; I think it would involve some higher branch of math such as Galois theory (of which I am by no means an expert). But, if you want an expansion in terms of <span class=\"math-container\">$z=\\cos{\\theta}+i \\sin{\\theta}$</span>, then just use the expressions you gave in your question statement for sine and cosine to get</p>\n<p><span class=\"math-container\">$$\\tan^n{\\theta}=\\frac{\\sin^n{\\theta}}{\\cos^n{\\theta}}=$$</span>\n<span class=\"math-container\">$$\\frac{1}{i^n}\\cdot\\frac{(z-\\frac{1}{z})^n}{(z+\\frac{1}{z})^n}=$$</span>\n<span class=\"math-container\">$$\\frac{1}{i^n}\\left(\\frac{z^2-1}{z^2+1}\\right)^n$$</span></p>\n<p>(note the factor <span class=\"math-container\">$\\frac{1}{i^n}$</span>, which comes from <span class=\"math-container\">$\\sin k\\theta = \\frac{1}{2i}\\left(z^k - \\frac{1}{z^k}\\right)$</span>).</p>\n<p>For reference, here is where I found the formula for <span class=\"math-container\">$F_n(x)$</span>: <a href=\"https://www2.oberlin.edu/faculty/jcalcut/arctan.pdf\" rel=\"nofollow noreferrer\">https://www2.oberlin.edu/faculty/jcalcut/arctan.pdf</a></p>\n<p>This is a research paper on various properties of <span class=\"math-container\">$F_n$</span>; most of the results after section 2 (the derivation of the formula for <span class=\"math-container\">$F_n$</span>) involve Galois theory, although section 6 (applications and questions) might be understandable.</p>\n<p>Note: The paper references the &quot;Chebyshev polynomials of the first kind&quot;, denoted <span class=\"math-container\">$T_n(x)$</span>, which are basically the cosine analogue of <span class=\"math-container\">$F_n(x)$</span>. They are defined as <span class=\"math-container\">$T_n(x)=\\cos(n \\arccos{x})$</span>.</p>\n" } ]
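The boxed formula lends itself to direct computation. A minimal SymPy sketch (illustrative; the function name F is ours, not from the answer or the cited paper):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def F(n):
    """tan(n*arctan(x)) as a rational function, via Im/Re of (1 + i*x)**n."""
    w = sp.expand((1 + sp.I * x) ** n)
    return sp.simplify(sp.im(w) / sp.re(w))

print(F(2))  # equivalent to 2*x/(1 - x**2), the tangent double-angle formula
print(F(3))  # equivalent to (3*x - x**3)/(1 - 3*x**2)
```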
Question: Expanding powers of $\tan^n \theta$ Answer 1: <p>You can use this same method to get $\sin^n\theta$ and $\cos^n\theta$ in terms of sines and cosines of multiples of $\theta$; then $\tan^n\theta=\frac{\sin^n\theta}{\cos^n\theta}$ can be obtained from the above as a quotient, if your goal is just to expand it through sines or cosines of multiples of $\theta$. (This result involves both $\sin$ and $\cos$ terms.)</p> Answer 2: <p>Really late answer, but I feel this is worth something.</p> <p>So, I'm not sure about <span class="math-container">$\tan^{n} \theta$</span>, but I think that it is much more common to express <span class="math-container">$\tan{n \theta}$</span> in terms of powers of <span class="math-container">$\tan{\theta}$</span>, so here's a derivation for that... it might not be exactly what you want, but I'm not even sure there is an analogue for <span class="math-container">$\tan^{n} \theta$</span>, so I'm putting this as an answer.</p> <p>Suppose we want a closed form for <span class="math-container">$F_n(x)=\tan(n \arctan{x})$</span> as a rational function of <span class="math-container">$x$</span>. From the definition, it is clear that <span class="math-container">$F_n(\tan{\theta}) = \tan{n \theta}$</span>, so this is what we want. Clearly, <span class="math-container">$F_n$</span> doesn't look like a rational function right now, so let's rewrite it. I'll let <span class="math-container">$\theta = \arctan{x}$</span> for simplicity.</p> <p><span class="math-container">$$F_n(x)=\tan(n \arctan{x})=$$</span> <span class="math-container">$$\frac{\sin (n \theta)}{\cos (n \theta)}=$$</span> <span class="math-container">$$\frac{1}{i} \cdot \frac{(\cos{\theta} + i \sin{\theta})^n - (\cos{\theta} - i \sin{\theta})^n}{(\cos{\theta} + i \sin{\theta})^n+(\cos{\theta} - i \sin{\theta})^n},$$</span></p> <p>where the last equality follows from De Moivre's formula. Now, this looks much more complicated, but remember that we have <span class="math-container">$\theta = \arctan{x}$</span>. From the standard formulas, we recall that:</p> <p><span class="math-container">$$\cos(\arctan{x})=\frac{1}{\sqrt{x^2+1}}$$</span> <span class="math-container">$$\sin(\arctan{x})=\frac{x}{\sqrt{x^2+1}}$$</span></p> <p>So, we have that:</p> <p><span class="math-container">$$F_n(x)=\frac{1}{i} \cdot \frac{\frac{(1+ix)^n}{\sqrt{(x^2+1)^n}}-\frac{(1-ix)^n}{\sqrt{(x^2+1)^n}}}{\frac{(1+ix)^n}{\sqrt{(x^2+1)^n}}+\frac{(1-ix)^n}{\sqrt{(x^2+1)^n}}}=$$</span> <span class="math-container">$$\frac{1}{i} \cdot \frac{(1+ix)^n-(1-ix)^n}{(1+ix)^n+(1-ix)^n}$$</span></p> <p>Now, we can consider the binomial expansions of <span class="math-container">$(1+ix)^n$</span> and <span class="math-container">$(1-ix)^n$</span>. By the binomial theorem, we have that:</p> <p><span class="math-container">$$(1+ix)^n=\sum_{j=0}^{n} \binom{n}{j} (ix)^j$$</span> <span class="math-container">$$(1-ix)^n=\sum_{j=0}^{n} \binom{n}{j} (-ix)^j$$</span></p> <p>Now, in the numerator, all the terms with even <span class="math-container">$j$</span> will cancel, and in the denominator, all the terms with odd <span class="math-container">$j$</span> will cancel. So, in the numerator, we will be left with twice the imaginary part of <span class="math-container">$(1+ix)^n$</span> times <span class="math-container">$i$</span>, and in the denominator, we will be left with twice the real part of <span class="math-container">$(1+ix)^n$</span>; the factors of <span class="math-container">$2$</span> cancel. 
So,</p> <p><span class="math-container">$$F_n(x)=\frac{1}{i} \cdot \frac{i \cdot \Im[(1+ix)^n]}{\Re[(1+ix)^n]}=$$</span> <span class="math-container">$$\frac{\Im[(1+ix)^n]}{\Re[(1+ix)^n]},$$</span></p> <p>where <span class="math-container">$\Im[a+bi]=b$</span> and <span class="math-container">$\Re[a+bi]=a$</span>.</p> <p>Hence, we have the final result that <span class="math-container">$$\boxed{\tan(n \theta) = \frac{\Im[(1+i \tan{\theta})^n]}{\Re[(1+i \tan{\theta})^n]}}$$</span></p> <p>You can derive an &quot;explicit&quot; formula by just writing out the binomial expansion terms, but I think it is much cleaner this way.</p> <p>As to your question regarding <span class="math-container">$\tan^{n} \theta$</span>, I am almost certain that there is no expansion as a sum of tangents, sines, or cosines of multiples of <span class="math-container">$\theta$</span>. I have absolutely no idea how to prove it; I think it would involve some higher branch of math such as Galois theory (of which I am by no means an expert). But, if you want an expansion in terms of <span class="math-container">$z=\cos{\theta}+i \sin{\theta}$</span>, then just use the expressions you gave in your question statement for sine and cosine to get</p> <p><span class="math-container">$$\tan^n{\theta}=\frac{\sin^n{\theta}}{\cos^n{\theta}}=$$</span> <span class="math-container">$$\frac{1}{i^n}\cdot\frac{(z-\frac{1}{z})^n}{(z+\frac{1}{z})^n}=$$</span> <span class="math-container">$$\frac{1}{i^n}\left(\frac{z^2-1}{z^2+1}\right)^n$$</span></p> <p>(note the factor <span class="math-container">$\frac{1}{i^n}$</span>, which comes from <span class="math-container">$\sin k\theta = \frac{1}{2i}\left(z^k - \frac{1}{z^k}\right)$</span>).</p> <p>For reference, here is where I found the formula for <span class="math-container">$F_n(x)$</span>: <a href="https://www2.oberlin.edu/faculty/jcalcut/arctan.pdf" rel="nofollow noreferrer">https://www2.oberlin.edu/faculty/jcalcut/arctan.pdf</a></p> <p>This is a research paper on various properties of <span class="math-container">$F_n$</span>; most of the results after section 2 (the derivation of the formula for <span class="math-container">$F_n$</span>) involve Galois theory, although section 6 (applications and questions) might be understandable.</p> <p>Note: The paper references the &quot;Chebyshev polynomials of the first kind&quot;, denoted <span class="math-container">$T_n(x)$</span>, which are basically the cosine analogue of <span class="math-container">$F_n(x)$</span>. They are defined as <span class="math-container">$T_n(x)=\cos(n \arccos{x})$</span>.</p>
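The corrected $z$-form above (with the $1/i^n$ factor) can be double-checked numerically. A short Python sketch (illustrative only, not from the original answer):

```python
import cmath, math

def tan_pow_via_z(theta, n):
    """(1/i**n) * ((z**2 - 1)/(z**2 + 1))**n with z = exp(i*theta)."""
    z = cmath.exp(1j * theta)
    return ((z * z - 1) / (z * z + 1)) ** n / (1j) ** n

theta, n = 0.7, 5
print(math.tan(theta) ** n)           # direct value
print(tan_pow_via_z(theta, n).real)   # agrees; the imaginary part is ~0
```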
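<p>As a quick numerical sanity check of the boxed formula above, one can compare both sides directly for sample values. Below is a minimal Python sketch (the helper name <code>tan_multiple</code> and the sample values are just illustrative):</p> <pre><code>import math

def tan_multiple(theta, n):
    # Evaluate tan(n*theta) as Im[(1 + i*tan(theta))^n] / Re[(1 + i*tan(theta))^n]
    z = (1 + 1j * math.tan(theta)) ** n
    return z.imag / z.real

theta, n = 0.3, 5
print(tan_multiple(theta, n))  # 14.1014199...
print(math.tan(n * theta))     # agrees with tan(1.5)
</code></pre>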
Proof of $\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$
2,079,112
4
https://math.stackexchange.com/questions/2079112/proof-of-epsilon-ijk-epsilon-klm-delta-il-delta-jm-delta-im-delta
<p>I'm a student of physics. There is an identity in tensor calculus involving Kronecker deltas and Levi-Civita pseudo-tensors, given by $$\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl},$$ which is extensively used in physics in deriving various identities. I have found a proof of this neither in physics textbooks nor in Wikipedia. In particular, how does the above formula follow from the definition of the $\epsilon_{ijk}$ tensor, $$\epsilon_{ijk} = \begin{cases} +1 &amp; \text{ for even permutations of } (1,2,3), \\ -1 &amp; \text{ for odd permutations of } (1,2,3), \\ \;\;\,0 &amp; \text{ for repetition of indices }. \end{cases}$$ This is the only definition I'm familiar with.</p>
[ { "answer_id": 3659595, "score": 3, "text": "<p>Here's an explicit proof that doesn't rely on knowning anything about determinants and just uses the definition of <span class=\"math-container\">$\\epsilon_{ijk}$</span> and <span class=\"math-container\">$\\delta_{ij}$</span>.</p>\n\n<p>Consider first</p>\n\n<p><span class=\"math-container\">$$\n\\epsilon_{ijk}\\epsilon_{lmn}\n$$</span></p>\n\n<p>This quantity is <span class=\"math-container\">$+1$</span> if <span class=\"math-container\">$(lmn)$</span> is an even permutation of <span class=\"math-container\">$(ijk)$</span> and <span class=\"math-container\">$-1$</span> if <span class=\"math-container\">$(lmn)$</span> is an odd permutation of <span class=\"math-container\">$(ijk)$</span>. It is <span class=\"math-container\">$0$</span> if <span class=\"math-container\">$(lmn)$</span> is not a permutation of <span class=\"math-container\">$(ijk)$</span>. This means each of <span class=\"math-container\">$(ijk)$</span> must have a pair in <span class=\"math-container\">$(lmn)$</span>. We can enumerate all possibilities.</p>\n\n<p><span class=\"math-container\">\\begin{align}\n\\epsilon_{ijk}\\epsilon_{lmn} = &amp;\\delta_{il}\\delta_{jm}\\delta_{kn} + \\delta_{im}\\delta_{jn}\\delta_{kl} + \\delta_{in}\\delta_{jl}\\delta_{km}\\\\\n-&amp;\\delta_{in}\\delta_{jm}\\delta_{kl} - \\delta_{il}\\delta_{jn}\\delta_{km} - \\delta_{im}\\delta_{jl}\\delta_{kn}\n\\end{align}</span></p>\n\n<p>Let us now set <span class=\"math-container\">$l=i$</span> and sum over <span class=\"math-container\">$i$</span> (with Einstein summation notation implied)</p>\n\n<p><span class=\"math-container\">\\begin{align}\n\\epsilon_{ijk}\\epsilon_{imn} = &amp;\\delta_{ii}\\delta_{jm}\\delta_{kn} + \\delta_{im}\\delta_{jn}\\delta_{ki} + \\delta_{in}\\delta_{ji}\\delta_{km}\\\\\n-&amp;\\delta_{in}\\delta_{jm}\\delta_{ki} - \\delta_{ii}\\delta_{jn}\\delta_{km} - \\delta_{im}\\delta_{ji}\\delta_{kn}\\\\\n=&amp; 3\\delta_{jm}\\delta_{kn} + \\delta_{km}\\delta_{jn} + \\delta_{jn}\\delta_{km}\\\\\n-&amp;\\delta_{kn}\\delta_{jm} - 3\\delta_{jn}\\delta_{km} - \\delta_{jm}\\delta_{kn}\\\\\n=&amp; \\delta_{jm}\\delta_{kn} - \\delta_{jn}\\delta_{km}\n\\end{align}</span></p>\n\n<p>By a change of index <span class=\"math-container\">$i\\rightarrow k \\rightarrow j\\rightarrow i$</span> and <span class=\"math-container\">$m\\rightarrow l$</span>, <span class=\"math-container\">$n\\rightarrow m$</span> we get</p>\n\n<p><span class=\"math-container\">\\begin{align}\n\\epsilon_{kij}\\epsilon_{klm}=\\epsilon_{ijk}\\epsilon_{klm} = \\delta_{il}\\delta_{jm} - \\delta_{im}\\delta_{jl}\n\\end{align}</span></p>\n" }, { "answer_id": 3659638, "score": 1, "text": "<p>We have that (the repeated index <span class=\"math-container\">$s$</span> is summed and therefore it cannot enter into the result)</p>\n\n<p><span class=\"math-container\">$\\begin{align}\n\\epsilon_{sab}\\epsilon_{sij}= A_{abij}\n\\end{align}$</span></p>\n\n<p>where <span class=\"math-container\">$A_{abij}=-A_{baij}=-A_{abji}=A_{ijab}$</span>, which are obvious symmetries of the still unknown symbol <span class=\"math-container\">$A$</span>. \nSince <span class=\"math-container\">$\\epsilon_{sab}$</span> is non-zero only if <span class=\"math-container\">$s\\neq a\\neq b\\neq s$</span> (i.e. 
the indexes are all different), and similarly for <span class=\"math-container\">$\\epsilon_{sij}$</span>, the only possibility to have a non-zero result is that <span class=\"math-container\">$a=i$</span> and <span class=\"math-container\">$b=j$</span> or that <span class=\"math-container\">$a=j$</span> and <span class=\"math-container\">$b=i$</span>, so that </p>\n\n<p><span class=\"math-container\">$\\begin{align}\nA_{abij} = \\alpha \\delta_{ai} \\delta_{bj} + \\beta \\delta_{aj} \\delta_{bi}\n\\end{align}$</span> </p>\n\n<p>To implement the expected symmetries of <span class=\"math-container\">$A$</span> we must have <span class=\"math-container\">$\\alpha = -\\beta$</span>. The value <span class=\"math-container\">$\\alpha = 1$</span> is fixed by considering any non-zero particular case (e.g. <span class=\"math-container\">$a=i=1$</span>, <span class=\"math-container\">$b=j=2$</span>, gives <span class=\"math-container\">$\\epsilon_{s12}\\epsilon_{s12}= \\epsilon_{312}\\epsilon_{312}=1$</span>).</p>\n" }, { "answer_id": 3659595, "score": 3, "text": "<p>Here's an explicit proof that doesn't rely on knowning anything about determinants and just uses the definition of <span class=\"math-container\">$\\epsilon_{ijk}$</span> and <span class=\"math-container\">$\\delta_{ij}$</span>.</p>\n\n<p>Consider first</p>\n\n<p><span class=\"math-container\">$$\n\\epsilon_{ijk}\\epsilon_{lmn}\n$$</span></p>\n\n<p>This quantity is <span class=\"math-container\">$+1$</span> if <span class=\"math-container\">$(lmn)$</span> is an even permutation of <span class=\"math-container\">$(ijk)$</span> and <span class=\"math-container\">$-1$</span> if <span class=\"math-container\">$(lmn)$</span> is an odd permutation of <span class=\"math-container\">$(ijk)$</span>. It is <span class=\"math-container\">$0$</span> if <span class=\"math-container\">$(lmn)$</span> is not a permutation of <span class=\"math-container\">$(ijk)$</span>. This means each of <span class=\"math-container\">$(ijk)$</span> must have a pair in <span class=\"math-container\">$(lmn)$</span>. 
Question: Proof of $\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}$ Answer 1: <p>Here's an explicit proof that doesn't rely on knowing anything about determinants and just uses the definition of <span class="math-container">$\epsilon_{ijk}$</span> and <span class="math-container">$\delta_{ij}$</span>.</p> <p>Consider first</p> <p><span class="math-container">$$ \epsilon_{ijk}\epsilon_{lmn} $$</span></p> <p>This quantity is <span class="math-container">$+1$</span> if <span class="math-container">$(lmn)$</span> is an even permutation of <span class="math-container">$(ijk)$</span> and <span class="math-container">$-1$</span> if <span class="math-container">$(lmn)$</span> is an odd permutation of <span class="math-container">$(ijk)$</span>. It is <span class="math-container">$0$</span> if <span class="math-container">$(lmn)$</span> is not a permutation of <span class="math-container">$(ijk)$</span>. This means each of <span class="math-container">$(ijk)$</span> must have a pair in <span class="math-container">$(lmn)$</span>. We can enumerate all possibilities.</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{lmn} = &amp;\delta_{il}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{kl} + \delta_{in}\delta_{jl}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{kl} - \delta_{il}\delta_{jn}\delta_{km} - \delta_{im}\delta_{jl}\delta_{kn} \end{align}</span></p> <p>Let us now set <span class="math-container">$l=i$</span> and sum over <span class="math-container">$i$</span> (with Einstein summation notation implied)</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{imn} = &amp;\delta_{ii}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{ki} + \delta_{in}\delta_{ji}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{ki} - \delta_{ii}\delta_{jn}\delta_{km} - \delta_{im}\delta_{ji}\delta_{kn}\\ =&amp; 3\delta_{jm}\delta_{kn} + \delta_{km}\delta_{jn} + \delta_{jn}\delta_{km}\\ -&amp;\delta_{kn}\delta_{jm} - 3\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}\\ =&amp; \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km} \end{align}</span></p> <p>By a change of index <span class="math-container">$i\rightarrow k \rightarrow j\rightarrow i$</span> and <span class="math-container">$m\rightarrow l$</span>, <span class="math-container">$n\rightarrow m$</span> we get</p> <p><span class="math-container">\begin{align} \epsilon_{kij}\epsilon_{klm}=\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} \end{align}</span></p> Answer 2: <p>We have that (the repeated index <span class="math-container">$s$</span> is summed and therefore it cannot enter into the result)</p> <p><span class="math-container">$\begin{align} \epsilon_{sab}\epsilon_{sij}= A_{abij} \end{align}$</span></p> <p>where <span class="math-container">$A_{abij}=-A_{baij}=-A_{abji}=A_{ijab}$</span>, which are obvious symmetries of the still unknown symbol <span class="math-container">$A$</span>. Since <span class="math-container">$\epsilon_{sab}$</span> is non-zero only if <span class="math-container">$s\neq a\neq b\neq s$</span> (i.e. the indexes are all different), and similarly for <span class="math-container">$\epsilon_{sij}$</span>, the only possibility to have a non-zero result is that <span class="math-container">$a=i$</span> and <span class="math-container">$b=j$</span> or that <span class="math-container">$a=j$</span> and <span class="math-container">$b=i$</span>, so that </p> <p><span class="math-container">$\begin{align} A_{abij} = \alpha \delta_{ai} \delta_{bj} + \beta \delta_{aj} \delta_{bi} \end{align}$</span> </p> <p>To implement the expected symmetries of <span class="math-container">$A$</span> we must have <span class="math-container">$\alpha = -\beta$</span>. The value <span class="math-container">$\alpha = 1$</span> is fixed by considering any non-zero particular case (e.g. <span class="math-container">$a=i=1$</span>, <span class="math-container">$b=j=2$</span>, gives <span class="math-container">$\epsilon_{s12}\epsilon_{s12}= \epsilon_{312}\epsilon_{312}=1$</span>).</p>
We can enumerate all possibilities.</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{lmn} = &amp;\delta_{il}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{kl} + \delta_{in}\delta_{jl}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{kl} - \delta_{il}\delta_{jn}\delta_{km} - \delta_{im}\delta_{jl}\delta_{kn} \end{align}</span></p> <p>Let us now set <span class="math-container">$l=i$</span> and sum over <span class="math-container">$i$</span> (with Einstein summation notation implied)</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{imn} = &amp;\delta_{ii}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{ki} + \delta_{in}\delta_{ji}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{ki} - \delta_{ii}\delta_{jn}\delta_{km} - \delta_{im}\delta_{ji}\delta_{kn}\\ =&amp; 3\delta_{jm}\delta_{kn} + \delta_{km}\delta_{jn} + \delta_{jn}\delta_{km}\\ -&amp;\delta_{kn}\delta_{jm} - 3\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}\\ =&amp; \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km} \end{align}</span></p> <p>By a change of index <span class="math-container">$i\rightarrow k \rightarrow j\rightarrow i$</span> and <span class="math-container">$m\rightarrow l$</span>, <span class="math-container">$n\rightarrow m$</span> we get</p> <p><span class="math-container">\begin{align} \epsilon_{kij}\epsilon_{klm}=\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} \end{align}</span></p> Answer 4: <p>We have that (the repeated index <span class="math-container">$s$</span> is summed and therefore it cannot enter into the result)</p> <p><span class="math-container">$\begin{align} \epsilon_{sab}\epsilon_{sij}= A_{abij} \end{align}$</span></p> <p>where <span class="math-container">$A_{abij}=-A_{baij}=-A_{abji}=A_{ijab}$</span>, which are obvious symmetries of the still unknown symbol <span class="math-container">$A$</span>. Since <span class="math-container">$\epsilon_{sab}$</span> is non-zero only if <span class="math-container">$s\neq a\neq b\neq s$</span> (i.e. the indexes are all different), and similarly for <span class="math-container">$\epsilon_{sij}$</span>, the only possibility to have a non-zero result is that <span class="math-container">$a=i$</span> and <span class="math-container">$b=j$</span> or that <span class="math-container">$a=j$</span> and <span class="math-container">$b=i$</span>, so that </p> <p><span class="math-container">$\begin{align} A_{abij} = \alpha \delta_{ai} \delta_{bj} + \beta \delta_{aj} \delta_{bi} \end{align}$</span> </p> <p>To implement the expected symmetries of <span class="math-container">$A$</span> we must have <span class="math-container">$\alpha = -\beta$</span>. The value <span class="math-container">$\alpha = 1$</span> is fixed by considering any non-zero particular case (e.g. 
<span class="math-container">$a=i=1$</span>, <span class="math-container">$b=j=2$</span>, gives <span class="math-container">$\epsilon_{s12}\epsilon_{s12}= \epsilon_{312}\epsilon_{312}=1$</span>).</p> Answer 5: <p>Here's an explicit proof that doesn't rely on knowning anything about determinants and just uses the definition of <span class="math-container">$\epsilon_{ijk}$</span> and <span class="math-container">$\delta_{ij}$</span>.</p> <p>Consider first</p> <p><span class="math-container">$$ \epsilon_{ijk}\epsilon_{lmn} $$</span></p> <p>This quantity is <span class="math-container">$+1$</span> if <span class="math-container">$(lmn)$</span> is an even permutation of <span class="math-container">$(ijk)$</span> and <span class="math-container">$-1$</span> if <span class="math-container">$(lmn)$</span> is an odd permutation of <span class="math-container">$(ijk)$</span>. It is <span class="math-container">$0$</span> if <span class="math-container">$(lmn)$</span> is not a permutation of <span class="math-container">$(ijk)$</span>. This means each of <span class="math-container">$(ijk)$</span> must have a pair in <span class="math-container">$(lmn)$</span>. We can enumerate all possibilities.</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{lmn} = &amp;\delta_{il}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{kl} + \delta_{in}\delta_{jl}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{kl} - \delta_{il}\delta_{jn}\delta_{km} - \delta_{im}\delta_{jl}\delta_{kn} \end{align}</span></p> <p>Let us now set <span class="math-container">$l=i$</span> and sum over <span class="math-container">$i$</span> (with Einstein summation notation implied)</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{imn} = &amp;\delta_{ii}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{ki} + \delta_{in}\delta_{ji}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{ki} - \delta_{ii}\delta_{jn}\delta_{km} - \delta_{im}\delta_{ji}\delta_{kn}\\ =&amp; 3\delta_{jm}\delta_{kn} + \delta_{km}\delta_{jn} + \delta_{jn}\delta_{km}\\ -&amp;\delta_{kn}\delta_{jm} - 3\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}\\ =&amp; \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km} \end{align}</span></p> <p>By a change of index <span class="math-container">$i\rightarrow k \rightarrow j\rightarrow i$</span> and <span class="math-container">$m\rightarrow l$</span>, <span class="math-container">$n\rightarrow m$</span> we get</p> <p><span class="math-container">\begin{align} \epsilon_{kij}\epsilon_{klm}=\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} \end{align}</span></p> Answer 6: <p>We have that (the repeated index <span class="math-container">$s$</span> is summed and therefore it cannot enter into the result)</p> <p><span class="math-container">$\begin{align} \epsilon_{sab}\epsilon_{sij}= A_{abij} \end{align}$</span></p> <p>where <span class="math-container">$A_{abij}=-A_{baij}=-A_{abji}=A_{ijab}$</span>, which are obvious symmetries of the still unknown symbol <span class="math-container">$A$</span>. Since <span class="math-container">$\epsilon_{sab}$</span> is non-zero only if <span class="math-container">$s\neq a\neq b\neq s$</span> (i.e. 
the indexes are all different), and similarly for <span class="math-container">$\epsilon_{sij}$</span>, the only possibility to have a non-zero result is that <span class="math-container">$a=i$</span> and <span class="math-container">$b=j$</span> or that <span class="math-container">$a=j$</span> and <span class="math-container">$b=i$</span>, so that </p> <p><span class="math-container">$\begin{align} A_{abij} = \alpha \delta_{ai} \delta_{bj} + \beta \delta_{aj} \delta_{bi} \end{align}$</span> </p> <p>To implement the expected symmetries of <span class="math-container">$A$</span> we must have <span class="math-container">$\alpha = -\beta$</span>. The value <span class="math-container">$\alpha = 1$</span> is fixed by considering any non-zero particular case (e.g. <span class="math-container">$a=i=1$</span>, <span class="math-container">$b=j=2$</span>, gives <span class="math-container">$\epsilon_{s12}\epsilon_{s12}= \epsilon_{312}\epsilon_{312}=1$</span>).</p> Answer 7: <p>Here's an explicit proof that doesn't rely on knowning anything about determinants and just uses the definition of <span class="math-container">$\epsilon_{ijk}$</span> and <span class="math-container">$\delta_{ij}$</span>.</p> <p>Consider first</p> <p><span class="math-container">$$ \epsilon_{ijk}\epsilon_{lmn} $$</span></p> <p>This quantity is <span class="math-container">$+1$</span> if <span class="math-container">$(lmn)$</span> is an even permutation of <span class="math-container">$(ijk)$</span> and <span class="math-container">$-1$</span> if <span class="math-container">$(lmn)$</span> is an odd permutation of <span class="math-container">$(ijk)$</span>. It is <span class="math-container">$0$</span> if <span class="math-container">$(lmn)$</span> is not a permutation of <span class="math-container">$(ijk)$</span>. This means each of <span class="math-container">$(ijk)$</span> must have a pair in <span class="math-container">$(lmn)$</span>. 
We can enumerate all possibilities.</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{lmn} = &amp;\delta_{il}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{kl} + \delta_{in}\delta_{jl}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{kl} - \delta_{il}\delta_{jn}\delta_{km} - \delta_{im}\delta_{jl}\delta_{kn} \end{align}</span></p> <p>Let us now set <span class="math-container">$l=i$</span> and sum over <span class="math-container">$i$</span> (with Einstein summation notation implied)</p> <p><span class="math-container">\begin{align} \epsilon_{ijk}\epsilon_{imn} = &amp;\delta_{ii}\delta_{jm}\delta_{kn} + \delta_{im}\delta_{jn}\delta_{ki} + \delta_{in}\delta_{ji}\delta_{km}\\ -&amp;\delta_{in}\delta_{jm}\delta_{ki} - \delta_{ii}\delta_{jn}\delta_{km} - \delta_{im}\delta_{ji}\delta_{kn}\\ =&amp; 3\delta_{jm}\delta_{kn} + \delta_{km}\delta_{jn} + \delta_{jn}\delta_{km}\\ -&amp;\delta_{kn}\delta_{jm} - 3\delta_{jn}\delta_{km} - \delta_{jm}\delta_{kn}\\ =&amp; \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km} \end{align}</span></p> <p>By a change of index <span class="math-container">$i\rightarrow k \rightarrow j\rightarrow i$</span> and <span class="math-container">$m\rightarrow l$</span>, <span class="math-container">$n\rightarrow m$</span> we get</p> <p><span class="math-container">\begin{align} \epsilon_{kij}\epsilon_{klm}=\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} \end{align}</span></p> Answer 8: <p>We have that (the repeated index <span class="math-container">$s$</span> is summed and therefore it cannot enter into the result)</p> <p><span class="math-container">$\begin{align} \epsilon_{sab}\epsilon_{sij}= A_{abij} \end{align}$</span></p> <p>where <span class="math-container">$A_{abij}=-A_{baij}=-A_{abji}=A_{ijab}$</span>, which are obvious symmetries of the still unknown symbol <span class="math-container">$A$</span>. Since <span class="math-container">$\epsilon_{sab}$</span> is non-zero only if <span class="math-container">$s\neq a\neq b\neq s$</span> (i.e. the indexes are all different), and similarly for <span class="math-container">$\epsilon_{sij}$</span>, the only possibility to have a non-zero result is that <span class="math-container">$a=i$</span> and <span class="math-container">$b=j$</span> or that <span class="math-container">$a=j$</span> and <span class="math-container">$b=i$</span>, so that </p> <p><span class="math-container">$\begin{align} A_{abij} = \alpha \delta_{ai} \delta_{bj} + \beta \delta_{aj} \delta_{bi} \end{align}$</span> </p> <p>To implement the expected symmetries of <span class="math-container">$A$</span> we must have <span class="math-container">$\alpha = -\beta$</span>. The value <span class="math-container">$\alpha = 1$</span> is fixed by considering any non-zero particular case (e.g. <span class="math-container">$a=i=1$</span>, <span class="math-container">$b=j=2$</span>, gives <span class="math-container">$\epsilon_{s12}\epsilon_{s12}= \epsilon_{312}\epsilon_{312}=1$</span>).</p>
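<p>As a quick machine check of the contracted identity derived above, here is a minimal Python sketch (the helper names <code>levi_civita</code> and <code>delta</code> are ad-hoc, not from the answers) that verifies $\epsilon_{ijk}\epsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$ over all $3^4$ free-index combinations.</p>
<pre><code>from itertools import product

def levi_civita(i, j, k):
    # +1 on even permutations of (0, 1, 2), -1 on odd ones, 0 otherwise
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def delta(a, b):
    return 1 if a == b else 0

# Check eps_{ijk} eps_{klm} = delta_{il} delta_{jm} - delta_{im} delta_{jl}
for i, j, l, m in product(range(3), repeat=4):
    lhs = sum(levi_civita(i, j, k) * levi_civita(k, l, m) for k in range(3))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs, (i, j, l, m)
print("identity holds for all 81 index combinations")
</code></pre>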
Why is there an $a_0$ term in Fourier series?
2,079,116
4
https://math.stackexchange.com/questions/2079116/why-is-there-an-a-0-term-in-fourier-series
<p><a href="https://i.stack.imgur.com/0zjpe.png" rel="nofollow noreferrer">Fourier Series Formula</a></p> <p>I get the reason why the Fourier Series sums up the sine and cosine function, but I do not understand what the purpose of the $a_0$ in the front? Is the $a_0$ there in case the original $f(x)$ function does not go through $(0, 0)$. I am no math genius so I am sorry if this a stupid question</p>
[ { "answer_id": 2079139, "score": 6, "text": "<p>For periodic sine and cosine functions, the average value of the function over a period is zero. i.e.\n$$\\frac 1T\\int_T \\sin\\left(\\frac{2\\pi}T x\\right)dx=0$$\nThis means if the fourier series of a function is totally expressed in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Babes.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Babes.png\" alt=\"wave\"></a></p>\n\n<p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of function in terms of sines and cosines.</p>\n" }, { "answer_id": 2079121, "score": 1, "text": "<p>You're quite right; you might find it an enlightening exercise to try and work out what would the Fourier series of $x \\mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p>\n" }, { "answer_id": 2079250, "score": 0, "text": "<p>$b_{n} sin(nx) + a_{n} cos(nx)$ put $n = 0$.</p>\n" }, { "answer_id": 2079139, "score": 6, "text": "<p>For periodic sine and cosine functions, the average value of the function over a period is zero. i.e.\n$$\\frac 1T\\int_T \\sin\\left(\\frac{2\\pi}T x\\right)dx=0$$\nThis means if the fourier series of a function is totally expressed in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Babes.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Babes.png\" alt=\"wave\"></a></p>\n\n<p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of function in terms of sines and cosines.</p>\n" }, { "answer_id": 2079121, "score": 1, "text": "<p>You're quite right; you might find it an enlightening exercise to try and work out what would the Fourier series of $x \\mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p>\n" }, { "answer_id": 2079250, "score": 0, "text": "<p>$b_{n} sin(nx) + a_{n} cos(nx)$ put $n = 0$.</p>\n" }, { "answer_id": 2079139, "score": 6, "text": "<p>For periodic sine and cosine functions, the average value of the function over a period is zero. i.e.\n$$\\frac 1T\\int_T \\sin\\left(\\frac{2\\pi}T x\\right)dx=0$$\nThis means if the fourier series of a function is totally expressed in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Babes.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Babes.png\" alt=\"wave\"></a></p>\n\n<p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of function in terms of sines and cosines.</p>\n" }, { "answer_id": 2079121, "score": 1, "text": "<p>You're quite right; you might find it an enlightening exercise to try and work out what would the Fourier series of $x \\mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p>\n" }, { "answer_id": 2079250, "score": 0, "text": "<p>$b_{n} sin(nx) + a_{n} cos(nx)$ put $n = 0$.</p>\n" }, { "answer_id": 2079139, "score": 6, "text": "<p>For periodic sine and cosine functions, the average value of the function over a period is zero. 
i.e.\n$$\\frac 1T\\int_T \\sin\\left(\\frac{2\\pi}T x\\right)dx=0$$\nThis means if the fourier series of a function is totally expressed in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Babes.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Babes.png\" alt=\"wave\"></a></p>\n\n<p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of function in terms of sines and cosines.</p>\n" }, { "answer_id": 2079121, "score": 1, "text": "<p>You're quite right; you might find it an enlightening exercise to try and work out what would the Fourier series of $x \\mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p>\n" }, { "answer_id": 2079250, "score": 0, "text": "<p>$b_{n} sin(nx) + a_{n} cos(nx)$ put $n = 0$.</p>\n" }, { "answer_id": 2079139, "score": 6, "text": "<p>For periodic sine and cosine functions, the average value of the function over a period is zero. i.e.\n$$\\frac 1T\\int_T \\sin\\left(\\frac{2\\pi}T x\\right)dx=0$$\nThis means if the fourier series of a function is totally expressed in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p>\n\n<p><a href=\"https://i.stack.imgur.com/Babes.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Babes.png\" alt=\"wave\"></a></p>\n\n<p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of function in terms of sines and cosines.</p>\n" }, { "answer_id": 2079121, "score": 1, "text": "<p>You're quite right; you might find it an enlightening exercise to try and work out what would the Fourier series of $x \\mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p>\n" }, { "answer_id": 2079250, "score": 0, "text": "<p>$b_{n} sin(nx) + a_{n} cos(nx)$ put $n = 0$.</p>\n" } ]
Question: Why is there an $a_0$ term in Fourier series? Answer 1: <p>For periodic sine and cosine functions, the average value of the function over a period is zero, i.e. $$\frac 1T\int_T \sin\left(\frac{2\pi}T x\right)dx=0$$ This means that if the Fourier series of a function is expressed entirely in terms of pure sine and cosine functions, then its average value is zero. But sometimes we have a periodic function whose average value is not zero. For example:</p> <p><a href="https://i.stack.imgur.com/Babes.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Babes.png" alt="wave"></a></p> <p>So here comes the mighty $a_0$ to get rid of that bias and express the rest of the function in terms of sines and cosines.</p> Answer 2: <p>You're quite right; you might find it an enlightening exercise to try and work out what the Fourier series of $x \mapsto 1$ would be if you weren't allowed access to the $n=0$ term of the cosine expansion.</p> Answer 3: <p>In $b_{n} \sin(nx) + a_{n} \cos(nx)$, put $n = 0$.</p>
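<p>To see $a_0$ doing its job numerically, here is a minimal Python sketch (assuming the common convention $a_n = \frac1\pi\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx$, so that $a_0/2$ is the average value): for the biased signal $f(x) = 1 + \sin x$ it recovers the average $1$ from $a_0$, while the pure oscillatory terms carry none of it.</p>
<pre><code>import numpy as np

# Biased periodic signal: its average over a period is 1, so no sum of
# pure sines and cosines (all of which average to zero) can represent it.
x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
f = 1 + np.sin(x)
dx = x[1] - x[0]

a0 = (f * dx).sum() / np.pi               # a_0 = (1/pi) * integral of f
a1 = (f * np.cos(x) * dx).sum() / np.pi   # a_1
b1 = (f * np.sin(x) * dx).sum() / np.pi   # b_1

print(round(a0 / 2, 4))   # ~1.0 -- the average value of f
print(round(a1, 4))       # ~0.0
print(round(b1, 4))       # ~1.0 -- the sin(x) component
</code></pre>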
Bounded sequence of functions has subsequence convergent a.e.?
2,079,123
4
https://math.stackexchange.com/questions/2079123/bounded-sequence-of-functions-has-subsequence-convergent-a-e
<blockquote> <p>Does a bounded sequence of functions have a subsequence that converges a.e.?</p> </blockquote> <p>There is an equivalent question:</p> <blockquote> <p>Does a weakly convergent sequence of functions have a subsequence that converges a.e.?</p> </blockquote> <p>These two are equivalent because of the <em>Banach-Steinhaus theorem</em>. Before asking here, I thought of the <em>Bolzano-Weierstrass theorem</em>, which says that every bounded sequence has a convergent subsequence. My question comes from asking whether this theorem can be applied to a sequence of functions. If the space of functions needs to be special (like reflexive, compact, or Hausdorff), please mention it.</p> <p>I haven't studied functional analysis, but I have attended graduate courses in partial differential equations and real analysis.</p>
[ { "answer_id": 2079385, "score": 1, "text": "<p>Not necessarily. For each $x\\in [0,1)$ let $B(x)=(x_n)_{n\\in \\mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\\{n:x_n=0\\}$ being infinite. For $n\\in \\mathbb N$ let $f_n(x)=x_n.$ Let $g:\\mathbb N\\to \\mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant . So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\\in \\cup_{m\\in \\mathbb N}(A_m\\cup B_m)$$ $$ \\text {where } A_m=\\{x: \\forall n\\geq m\\;(x_{g(n)}=0)\\}$$ $$ \\text { and } B_m=\\{x:\\forall n\\geq m\\;(x_{g(n)}=1)\\}.$$ We show that $\\mu(A_m)=\\mu(B_m)=0$ for each $m,$ where $\\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p>\n\n<p>(i). Fix $m$. Let $S(n)=\\{g(m+j):0\\leq j\\leq n\\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\\{0,1\\}.$</p>\n\n<p>(ii). For $h\\in 2^{S(n)}$ let $$R(h)=\\{x\\in [0,1): \\forall i\\in S(n)\\;(x_i=h(i)\\}$$ and let $V(h)=\\sum_{i\\in S(n)}2^{-i}h(i).$ </p>\n\n<p>Let $h_0\\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\\in S(n).$ </p>\n\n<p>(iii). For each $h\\in 2^{S(n)}$ we have $R(h)=\\{x+V(h): x\\in R(h_0)\\},$ so $$\\mu (R(h))=\\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\\cap R(h')=\\emptyset.$ Also $\\cup \\{R(h):h\\in 2^{S(n)}\\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^n.$ </p>\n\n<p>(iv).Putting all the parts of (iii) together we have $$1=\\mu ([0,1))=\\sum_{h\\in 2^{S(n)}}\\mu (R(h))=2^n\\mu (R(h_0)).$$ Since $A_m\\subset R(h_0)$ we have $1\\geq 2^n \\mu (A_m)$ for every $n.$ Therfore $\\mu (A_m)=0.$ </p>\n\n<p>The proof that $\\mu (B_m)=0$ is similar.</p>\n\n<p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p>\n" }, { "answer_id": 2080067, "score": 1, "text": "<p>There is no subsequence of $\\sin (nx)$ that converges a.e. In fact, every subsequence $\\sin (n_kx)$ diverges a.e. For a proof of this, see <a href=\"https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389\">Pointwise almost everywhere convergent subsequence of $\\{\\sin (nx)\\}$</a></p>\n" }, { "answer_id": 2079385, "score": 1, "text": "<p>Not necessarily. For each $x\\in [0,1)$ let $B(x)=(x_n)_{n\\in \\mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\\{n:x_n=0\\}$ being infinite. For $n\\in \\mathbb N$ let $f_n(x)=x_n.$ Let $g:\\mathbb N\\to \\mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant . So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\\in \\cup_{m\\in \\mathbb N}(A_m\\cup B_m)$$ $$ \\text {where } A_m=\\{x: \\forall n\\geq m\\;(x_{g(n)}=0)\\}$$ $$ \\text { and } B_m=\\{x:\\forall n\\geq m\\;(x_{g(n)}=1)\\}.$$ We show that $\\mu(A_m)=\\mu(B_m)=0$ for each $m,$ where $\\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p>\n\n<p>(i). Fix $m$. Let $S(n)=\\{g(m+j):0\\leq j\\leq n\\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\\{0,1\\}.$</p>\n\n<p>(ii). For $h\\in 2^{S(n)}$ let $$R(h)=\\{x\\in [0,1): \\forall i\\in S(n)\\;(x_i=h(i)\\}$$ and let $V(h)=\\sum_{i\\in S(n)}2^{-i}h(i).$ </p>\n\n<p>Let $h_0\\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\\in S(n).$ </p>\n\n<p>(iii). 
For each $h\\in 2^{S(n)}$ we have $R(h)=\\{x+V(h): x\\in R(h_0)\\},$ so $$\\mu (R(h))=\\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\\cap R(h')=\\emptyset.$ Also $\\cup \\{R(h):h\\in 2^{S(n)}\\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^n.$ </p>\n\n<p>(iv).Putting all the parts of (iii) together we have $$1=\\mu ([0,1))=\\sum_{h\\in 2^{S(n)}}\\mu (R(h))=2^n\\mu (R(h_0)).$$ Since $A_m\\subset R(h_0)$ we have $1\\geq 2^n \\mu (A_m)$ for every $n.$ Therfore $\\mu (A_m)=0.$ </p>\n\n<p>The proof that $\\mu (B_m)=0$ is similar.</p>\n\n<p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p>\n" }, { "answer_id": 2080067, "score": 1, "text": "<p>There is no subsequence of $\\sin (nx)$ that converges a.e. In fact, every subsequence $\\sin (n_kx)$ diverges a.e. For a proof of this, see <a href=\"https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389\">Pointwise almost everywhere convergent subsequence of $\\{\\sin (nx)\\}$</a></p>\n" }, { "answer_id": 2079385, "score": 1, "text": "<p>Not necessarily. For each $x\\in [0,1)$ let $B(x)=(x_n)_{n\\in \\mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\\{n:x_n=0\\}$ being infinite. For $n\\in \\mathbb N$ let $f_n(x)=x_n.$ Let $g:\\mathbb N\\to \\mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant . So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\\in \\cup_{m\\in \\mathbb N}(A_m\\cup B_m)$$ $$ \\text {where } A_m=\\{x: \\forall n\\geq m\\;(x_{g(n)}=0)\\}$$ $$ \\text { and } B_m=\\{x:\\forall n\\geq m\\;(x_{g(n)}=1)\\}.$$ We show that $\\mu(A_m)=\\mu(B_m)=0$ for each $m,$ where $\\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p>\n\n<p>(i). Fix $m$. Let $S(n)=\\{g(m+j):0\\leq j\\leq n\\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\\{0,1\\}.$</p>\n\n<p>(ii). For $h\\in 2^{S(n)}$ let $$R(h)=\\{x\\in [0,1): \\forall i\\in S(n)\\;(x_i=h(i)\\}$$ and let $V(h)=\\sum_{i\\in S(n)}2^{-i}h(i).$ </p>\n\n<p>Let $h_0\\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\\in S(n).$ </p>\n\n<p>(iii). For each $h\\in 2^{S(n)}$ we have $R(h)=\\{x+V(h): x\\in R(h_0)\\},$ so $$\\mu (R(h))=\\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\\cap R(h')=\\emptyset.$ Also $\\cup \\{R(h):h\\in 2^{S(n)}\\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^n.$ </p>\n\n<p>(iv).Putting all the parts of (iii) together we have $$1=\\mu ([0,1))=\\sum_{h\\in 2^{S(n)}}\\mu (R(h))=2^n\\mu (R(h_0)).$$ Since $A_m\\subset R(h_0)$ we have $1\\geq 2^n \\mu (A_m)$ for every $n.$ Therfore $\\mu (A_m)=0.$ </p>\n\n<p>The proof that $\\mu (B_m)=0$ is similar.</p>\n\n<p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p>\n" }, { "answer_id": 2080067, "score": 1, "text": "<p>There is no subsequence of $\\sin (nx)$ that converges a.e. In fact, every subsequence $\\sin (n_kx)$ diverges a.e. 
For a proof of this, see <a href=\"https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389\">Pointwise almost everywhere convergent subsequence of $\\{\\sin (nx)\\}$</a></p>\n" }, { "answer_id": 2079385, "score": 1, "text": "<p>Not necessarily. For each $x\\in [0,1)$ let $B(x)=(x_n)_{n\\in \\mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\\{n:x_n=0\\}$ being infinite. For $n\\in \\mathbb N$ let $f_n(x)=x_n.$ Let $g:\\mathbb N\\to \\mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant . So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\\in \\cup_{m\\in \\mathbb N}(A_m\\cup B_m)$$ $$ \\text {where } A_m=\\{x: \\forall n\\geq m\\;(x_{g(n)}=0)\\}$$ $$ \\text { and } B_m=\\{x:\\forall n\\geq m\\;(x_{g(n)}=1)\\}.$$ We show that $\\mu(A_m)=\\mu(B_m)=0$ for each $m,$ where $\\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p>\n\n<p>(i). Fix $m$. Let $S(n)=\\{g(m+j):0\\leq j\\leq n\\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\\{0,1\\}.$</p>\n\n<p>(ii). For $h\\in 2^{S(n)}$ let $$R(h)=\\{x\\in [0,1): \\forall i\\in S(n)\\;(x_i=h(i)\\}$$ and let $V(h)=\\sum_{i\\in S(n)}2^{-i}h(i).$ </p>\n\n<p>Let $h_0\\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\\in S(n).$ </p>\n\n<p>(iii). For each $h\\in 2^{S(n)}$ we have $R(h)=\\{x+V(h): x\\in R(h_0)\\},$ so $$\\mu (R(h))=\\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\\cap R(h')=\\emptyset.$ Also $\\cup \\{R(h):h\\in 2^{S(n)}\\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^n.$ </p>\n\n<p>(iv).Putting all the parts of (iii) together we have $$1=\\mu ([0,1))=\\sum_{h\\in 2^{S(n)}}\\mu (R(h))=2^n\\mu (R(h_0)).$$ Since $A_m\\subset R(h_0)$ we have $1\\geq 2^n \\mu (A_m)$ for every $n.$ Therfore $\\mu (A_m)=0.$ </p>\n\n<p>The proof that $\\mu (B_m)=0$ is similar.</p>\n\n<p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p>\n" }, { "answer_id": 2080067, "score": 1, "text": "<p>There is no subsequence of $\\sin (nx)$ that converges a.e. In fact, every subsequence $\\sin (n_kx)$ diverges a.e. For a proof of this, see <a href=\"https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389\">Pointwise almost everywhere convergent subsequence of $\\{\\sin (nx)\\}$</a></p>\n" }, { "answer_id": 2079385, "score": 1, "text": "<p>Not necessarily. For each $x\\in [0,1)$ let $B(x)=(x_n)_{n\\in \\mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\\{n:x_n=0\\}$ being infinite. For $n\\in \\mathbb N$ let $f_n(x)=x_n.$ Let $g:\\mathbb N\\to \\mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant . So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\\in \\cup_{m\\in \\mathbb N}(A_m\\cup B_m)$$ $$ \\text {where } A_m=\\{x: \\forall n\\geq m\\;(x_{g(n)}=0)\\}$$ $$ \\text { and } B_m=\\{x:\\forall n\\geq m\\;(x_{g(n)}=1)\\}.$$ We show that $\\mu(A_m)=\\mu(B_m)=0$ for each $m,$ where $\\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p>\n\n<p>(i). Fix $m$. Let $S(n)=\\{g(m+j):0\\leq j\\leq n\\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\\{0,1\\}.$</p>\n\n<p>(ii). 
For $h\\in 2^{S(n)}$ let $$R(h)=\\{x\\in [0,1): \\forall i\\in S(n)\\;(x_i=h(i)\\}$$ and let $V(h)=\\sum_{i\\in S(n)}2^{-i}h(i).$ </p>\n\n<p>Let $h_0\\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\\in S(n).$ </p>\n\n<p>(iii). For each $h\\in 2^{S(n)}$ we have $R(h)=\\{x+V(h): x\\in R(h_0)\\},$ so $$\\mu (R(h))=\\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\\cap R(h')=\\emptyset.$ Also $\\cup \\{R(h):h\\in 2^{S(n)}\\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^n.$ </p>\n\n<p>(iv).Putting all the parts of (iii) together we have $$1=\\mu ([0,1))=\\sum_{h\\in 2^{S(n)}}\\mu (R(h))=2^n\\mu (R(h_0)).$$ Since $A_m\\subset R(h_0)$ we have $1\\geq 2^n \\mu (A_m)$ for every $n.$ Therfore $\\mu (A_m)=0.$ </p>\n\n<p>The proof that $\\mu (B_m)=0$ is similar.</p>\n\n<p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p>\n" }, { "answer_id": 2080067, "score": 1, "text": "<p>There is no subsequence of $\\sin (nx)$ that converges a.e. In fact, every subsequence $\\sin (n_kx)$ diverges a.e. For a proof of this, see <a href=\"https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389\">Pointwise almost everywhere convergent subsequence of $\\{\\sin (nx)\\}$</a></p>\n" } ]
Question: Bounded sequence of functions has subsequence convergent a.e.? Answer 1: <p>Not necessarily. For each $x\in [0,1)$ let $B(x)=(x_n)_{n\in \mathbb N}$ be the sequence of digits for the representation of $x$ in base $2,$ with $\{n:x_n=0\}$ being infinite. For $n\in \mathbb N$ let $f_n(x)=x_n.$ Let $g:\mathbb N\to \mathbb N$ be strictly increasing. A convergent binary sequence is eventually constant. So if $(f_{g(n)}(x))_n=(x_{g(n)})_n$ converges then $$x\in \cup_{m\in \mathbb N}(A_m\cup B_m)$$ $$ \text {where } A_m=\{x: \forall n\geq m\;(x_{g(n)}=0)\}$$ $$ \text { and } B_m=\{x:\forall n\geq m\;(x_{g(n)}=1)\}.$$ We show that $\mu(A_m)=\mu(B_m)=0$ for each $m,$ where $\mu$ is Lebesgue measure, so the set of $x$ for which $(f_{g(n)}(x))_n$ converges is Lebesgue-null. </p> <p>(i). Fix $m$. Let $S(n)=\{g(m+j):0\leq j\leq n\}.$ Let $2^{S(n)}$ denote the set of all functions from $S(n)$ to $\{0,1\}.$</p> <p>(ii). For $h\in 2^{S(n)}$ let $$R(h)=\{x\in [0,1): \forall i\in S(n)\;(x_i=h(i))\}$$ and let $V(h)=\sum_{i\in S(n)}2^{-i}h(i).$ </p> <p>Let $h_0\in 2^{S(n)}$ where $h_0(j)=0$ for all $j\in S(n).$ </p> <p>(iii). For each $h\in 2^{S(n)}$ we have $R(h)=\{x+V(h): x\in R(h_0)\},$ so $$\mu (R(h))=\mu (R(h_0)).$$ Now if $h,h'$ are distinct members of $2^{S(n)}$ then $R(h)\cap R(h')=\emptyset.$ Also $\cup \{R(h):h\in 2^{S(n)}\}=[0,1).$ And the number of members of $2^{S(n)}$ is $2^{n+1}.$ </p> <p>(iv). Putting all the parts of (iii) together we have $$1=\mu ([0,1))=\sum_{h\in 2^{S(n)}}\mu (R(h))=2^{n+1}\mu (R(h_0)).$$ Since $A_m\subset R(h_0)$ we have $1\geq 2^{n+1} \mu (A_m)$ for every $n.$ Therefore $\mu (A_m)=0.$ </p> <p>The proof that $\mu (B_m)=0$ is similar.</p> <p>Note: I neglected to show that $R(h_0)$ is a Lebesgue-measurable set. I will leave this as an exercise because it's 8 A.M. and I've been up all night. Happy New Year. </p> Answer 2: <p>There is no subsequence of $\sin (nx)$ that converges a.e. In fact, every subsequence $\sin (n_kx)$ diverges a.e. For a proof of this, see <a href="https://math.stackexchange.com/questions/1380286/pointwise-almost-everywhere-convergent-subsequence-of-sin-nx/1380389#1380389">Pointwise almost everywhere convergent subsequence of $\{\sin (nx)\}$</a></p>
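<p>The binary-digit construction in the first answer lends itself to a Monte Carlo illustration (evidence, not a proof): the digits of a uniform $x \in [0,1)$ are i.i.d. fair bits, and so are the digits along any fixed strictly increasing subsequence $g$, so a constant tail of length $40$ has probability $2^{1-40}$ and essentially no sampled $x$ yields a convergent digit subsequence. A minimal Python sketch:</p>
<pre><code>import random

# Model the digits of a uniform x in [0,1) along a fixed subsequence g as
# i.i.d. fair bits; (f_{g(n)}(x))_n converges iff the bit tail is
# eventually constant, an event of probability 0.
random.seed(0)
TRIALS, TAIL = 100000, 40

stabilized = 0
for _ in range(TRIALS):
    tail = [random.getrandbits(1) for _ in range(TAIL)]  # a length-40 digit stretch
    if len(set(tail)) == 1:                              # constant tail?
        stabilized += 1

# P(constant 40-bit tail) = 2 * 2**(-40), so we expect to see 0 hits
print(stabilized, "of", TRIALS, "samples have a constant tail")
</code></pre>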
Permutations : A person picks 4 numbers from a set of 9, what are the total ways he can Win?
2,079,134
4
https://math.stackexchange.com/questions/2079134/permutations-a-person-picks-4-numbers-from-a-set-of-9-what-are-the-total-ways
<p>In a casino, a person picks 4 numbers from a set of 9 numbers (3 even, 6 odd). A person wins if at least one of those 4 numbers is even. What are the total ways he can win?</p> <p>There are two approaches, I believe.</p> <p>1) One is to find all possible ways of picking 4 from 9 (9C4) and subtract the number of ways of getting only odd numbers (6C4), i.e.:</p> <pre><code>9C4 - 6C4 </code></pre> <p>2) Pick one even number from the set of 3 evens and pick 3 from the remaining 8, i.e.:</p> <pre><code>3C1 * 8C3 </code></pre> <p>What is the difference between the above two? In my opinion the 2nd method does the same thing, so why is it wrong?</p>
[ { "answer_id": 2079138, "score": 2, "text": "<p>The second method is wrong because you are over counting.</p>\n\n<p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\\cdots,o_6$.</p>\n\n<p>When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.</p>\n\n<p>So you have $e_1,e_2,o_1,o_2$ and you win.</p>\n\n<p>But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p>\n\n<p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p>\n" }, { "answer_id": 2079136, "score": 0, "text": "<p>The problem is that in the second method you are marking one specific odd element in each set. So the combinations which have more than one odd number are counted more than once.</p>\n" }, { "answer_id": 2079138, "score": 2, "text": "<p>The second method is wrong because you are over counting.</p>\n\n<p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\\cdots,o_6$.</p>\n\n<p>When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.</p>\n\n<p>So you have $e_1,e_2,o_1,o_2$ and you win.</p>\n\n<p>But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p>\n\n<p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p>\n" }, { "answer_id": 2079136, "score": 0, "text": "<p>The problem is that in the second method you are marking one specific odd element in each set. So the combinations which have more than one odd number are counted more than once.</p>\n" }, { "answer_id": 2079138, "score": 2, "text": "<p>The second method is wrong because you are over counting.</p>\n\n<p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\\cdots,o_6$.</p>\n\n<p>When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.</p>\n\n<p>So you have $e_1,e_2,o_1,o_2$ and you win.</p>\n\n<p>But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p>\n\n<p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p>\n" }, { "answer_id": 2079136, "score": 0, "text": "<p>The problem is that in the second method you are marking one specific odd element in each set. So the combinations which have more than one odd number are counted more than once.</p>\n" }, { "answer_id": 2079138, "score": 2, "text": "<p>The second method is wrong because you are over counting.</p>\n\n<p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\\cdots,o_6$.</p>\n\n<p>When you choose a number from the set of three evens, suppose you get $e_1$. 
Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.</p>\n\n<p>So you have $e_1,e_2,o_1,o_2$ and you win.</p>\n\n<p>But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p>\n\n<p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p>\n" }, { "answer_id": 2079136, "score": 0, "text": "<p>The problem is that in the second method you are marking one specific odd element in each set. So the combinations which have more than one odd number are counted more than once.</p>\n" }, { "answer_id": 2079138, "score": 2, "text": "<p>The second method is wrong because you are over counting.</p>\n\n<p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\\cdots,o_6$.</p>\n\n<p>When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$, let them be $e_2,o_1,o_2$.</p>\n\n<p>So you have $e_1,e_2,o_1,o_2$ and you win.</p>\n\n<p>But you could also have selected them this way. When you chose one out the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p>\n\n<p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p>\n" }, { "answer_id": 2079136, "score": 0, "text": "<p>The problem is that in the second method you are marking one specific odd element in each set. So the combinations which have more than one odd number are counted more than once.</p>\n" } ]
Question: Permutations : A person picks 4 numbers from a set of 9, what are the total ways he can Win? Answer 1: <p>The second method is wrong because you are overcounting.</p> <p>Suppose the even numbers are $e_1,e_2,e_3$ and the odd numbers are $o_1,o_2,\cdots,o_6$.</p> <p>When you choose a number from the set of three evens, suppose you get $e_1$. Out of the remaining $8$, you choose $3$; let them be $e_2,o_1,o_2$.</p> <p>So you have $e_1,e_2,o_1,o_2$ and you win.</p> <p>But you could also have selected them this way. When you chose one out of the three evens, you got $e_2$. Later, when you chose the next three, you got $e_1,o_1,o_2$. The set of numbers at the end is identical.</p> <p>However, your method counts these two cases separately, whereas they should be counted exactly once. So, the final answer you get is incorrect.</p> Answer 2: <p>The problem is that in the second method you are marking one specific even element in each set. So the combinations which have more than one even number are counted more than once.</p>
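<p>A brute-force check in Python confirms the first method and shows exactly where the $168$ of the second method comes from: each winning set is counted once per even number it contains. (The labels $e_i$, $o_i$ below are arbitrary placeholders for the 9 numbers.)</p>
<pre><code>from itertools import combinations
from math import comb

evens = ["e1", "e2", "e3"]
odds = ["o1", "o2", "o3", "o4", "o5", "o6"]

# Brute force: 4-subsets of the 9 numbers containing at least one even
winning = [s for s in combinations(evens + odds, 4)
           if any(n in evens for n in s)]
print(len(winning))                # 111

print(comb(9, 4) - comb(6, 4))     # 111 -- method 1 agrees

# Method 2 counts each winning set once per even number it contains:
print(comb(3, 1) * comb(8, 3))     # 168
print(sum(sum(n in evens for n in s) for s in winning))   # 168 as well
</code></pre>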
Do these abstract groups have a name?
2,079,321
4
https://math.stackexchange.com/questions/2079321/do-these-abstract-groups-have-a-name
<p>I have recently found, as part of a problem in finite groups, the following abstract groups (where $p$ is an odd prime):</p> <p>$G_1 = \langle \tau, x, y : \tau^2 = x^p = y^p = 1, xy = yx, \tau x \tau^{-1} = x, \tau y \tau^{-1} = y^{-1} \rangle$</p> <p>$G_2 = \langle \tau, x, y : \tau^2 = x^p = y^p = 1, xy = yx, \tau x \tau^{-1} = x^{-1}, \tau y \tau^{-1} = y^{-1} \rangle$</p> <p>I am wondering if there is a common name for groups with either of these structures, so that I can look up existing results about them. I have never seen them in my undergraduate studies so far.</p>
[ { "answer_id": 2079369, "score": 4, "text": "<p>There are two ways of writing these groups, although both of them will be ambiguous (since without any further indication it owuld be unclear which one is referring to). </p>\n\n<p>The first one, as suggested by Arnaud D., is writing it as a semidirect product of cyclic groups. Recall that given groups $G$ and $H$ and a group homomorphism $\\varphi:H\\rightarrow G$, the semidirect product of $G$ and $H$ with respect to $\\varphi$ is the group $G\\rtimes_{\\varphi}H$ whose underlying set is $G\\times H$ and its product is given by\n$$(g_0,h_0)(g_1,h_1)=(g_0\\varphi(h_0)(g_1),h_0h_1)\\text{.}$$\nNote that this generalizes the direct product in the sense that those are just the semidirect product where $\\varphi$ is the trivial map (which sends every element to the identity). In this way, in sloppy notation, both of your groups would be $(C_p \\times C_p) \\rtimes C_2$. Note that $\\rtimes$ is $\\times$ together with $\\triangleleft$, used to say that something is a normal subgroup, so that's why the order is reversed.</p>\n\n<p>The second one I have seen is that of (sub)holomorph groups. Given a group $G$ and a subgroup $N$ of its automorphism group $\\mathrm{Aut}(G)$, the $N$-holomorph group of $G$ is the group\n$$\\mathrm{Hol}(G,N):=G\\rtimes_{\\iota} N$$\nwhere $\\iota$ is the inclusion map. This is less general and less common, as the holomorph group of $G$ refers usually to $\\mathrm{Hol}(G):=\\mathrm{Hol}(G,\\mathrm{Aut}\\,G)$. Using sloppy notation, your groups would be written as $\\mathrm{Hol}(C_p\\times C_p,C_2)$.</p>\n" }, { "answer_id": 2079633, "score": 3, "text": "<p>$G_1$ is the direct product of a cyclic group and a dihedral group, $C_p\\times D_p$ (or $C_p\\times D_{2p}$ depending on your notation).</p>\n\n<p>$G_2$ is the generalised dihedral group on the abelian group $C_p^2$.</p>\n\n<p>(See <a href=\"https://en.wikipedia.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Generalized_dihedral_group</a> or <a href=\"https://groupprops.subwiki.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://groupprops.subwiki.org/wiki/Generalized_dihedral_group</a>)</p>\n" }, { "answer_id": 2079369, "score": 4, "text": "<p>There are two ways of writing these groups, although both of them will be ambiguous (since without any further indication it owuld be unclear which one is referring to). </p>\n\n<p>The first one, as suggested by Arnaud D., is writing it as a semidirect product of cyclic groups. Recall that given groups $G$ and $H$ and a group homomorphism $\\varphi:H\\rightarrow G$, the semidirect product of $G$ and $H$ with respect to $\\varphi$ is the group $G\\rtimes_{\\varphi}H$ whose underlying set is $G\\times H$ and its product is given by\n$$(g_0,h_0)(g_1,h_1)=(g_0\\varphi(h_0)(g_1),h_0h_1)\\text{.}$$\nNote that this generalizes the direct product in the sense that those are just the semidirect product where $\\varphi$ is the trivial map (which sends every element to the identity). In this way, in sloppy notation, both of your groups would be $(C_p \\times C_p) \\rtimes C_2$. Note that $\\rtimes$ is $\\times$ together with $\\triangleleft$, used to say that something is a normal subgroup, so that's why the order is reversed.</p>\n\n<p>The second one I have seen is that of (sub)holomorph groups. 
Given a group $G$ and a subgroup $N$ of its automorphism group $\\mathrm{Aut}(G)$, the $N$-holomorph group of $G$ is the group\n$$\\mathrm{Hol}(G,N):=G\\rtimes_{\\iota} N$$\nwhere $\\iota$ is the inclusion map. This is less general and less common, as the holomorph group of $G$ refers usually to $\\mathrm{Hol}(G):=\\mathrm{Hol}(G,\\mathrm{Aut}\\,G)$. Using sloppy notation, your groups would be written as $\\mathrm{Hol}(C_p\\times C_p,C_2)$.</p>\n" }, { "answer_id": 2079633, "score": 3, "text": "<p>$G_1$ is the direct product of a cyclic group and a dihedral group, $C_p\\times D_p$ (or $C_p\\times D_{2p}$ depending on your notation).</p>\n\n<p>$G_2$ is the generalised dihedral group on the abelian group $C_p^2$.</p>\n\n<p>(See <a href=\"https://en.wikipedia.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Generalized_dihedral_group</a> or <a href=\"https://groupprops.subwiki.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://groupprops.subwiki.org/wiki/Generalized_dihedral_group</a>)</p>\n" }, { "answer_id": 2079369, "score": 4, "text": "<p>There are two ways of writing these groups, although both of them will be ambiguous (since without any further indication it owuld be unclear which one is referring to). </p>\n\n<p>The first one, as suggested by Arnaud D., is writing it as a semidirect product of cyclic groups. Recall that given groups $G$ and $H$ and a group homomorphism $\\varphi:H\\rightarrow G$, the semidirect product of $G$ and $H$ with respect to $\\varphi$ is the group $G\\rtimes_{\\varphi}H$ whose underlying set is $G\\times H$ and its product is given by\n$$(g_0,h_0)(g_1,h_1)=(g_0\\varphi(h_0)(g_1),h_0h_1)\\text{.}$$\nNote that this generalizes the direct product in the sense that those are just the semidirect product where $\\varphi$ is the trivial map (which sends every element to the identity). In this way, in sloppy notation, both of your groups would be $(C_p \\times C_p) \\rtimes C_2$. Note that $\\rtimes$ is $\\times$ together with $\\triangleleft$, used to say that something is a normal subgroup, so that's why the order is reversed.</p>\n\n<p>The second one I have seen is that of (sub)holomorph groups. Given a group $G$ and a subgroup $N$ of its automorphism group $\\mathrm{Aut}(G)$, the $N$-holomorph group of $G$ is the group\n$$\\mathrm{Hol}(G,N):=G\\rtimes_{\\iota} N$$\nwhere $\\iota$ is the inclusion map. This is less general and less common, as the holomorph group of $G$ refers usually to $\\mathrm{Hol}(G):=\\mathrm{Hol}(G,\\mathrm{Aut}\\,G)$. Using sloppy notation, your groups would be written as $\\mathrm{Hol}(C_p\\times C_p,C_2)$.</p>\n" }, { "answer_id": 2079633, "score": 3, "text": "<p>$G_1$ is the direct product of a cyclic group and a dihedral group, $C_p\\times D_p$ (or $C_p\\times D_{2p}$ depending on your notation).</p>\n\n<p>$G_2$ is the generalised dihedral group on the abelian group $C_p^2$.</p>\n\n<p>(See <a href=\"https://en.wikipedia.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Generalized_dihedral_group</a> or <a href=\"https://groupprops.subwiki.org/wiki/Generalized_dihedral_group\" rel=\"nofollow noreferrer\">https://groupprops.subwiki.org/wiki/Generalized_dihedral_group</a>)</p>\n" }, { "answer_id": 2079369, "score": 4, "text": "<p>There are two ways of writing these groups, although both of them will be ambiguous (since without any further indication it owuld be unclear which one is referring to). 
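A quick consistency check of the identification of $G_2$ (a sketch; writing $s$ for the generator of $C_2$, and assuming the action $\varphi(s)$ inverts both $C_p$ factors, as the generalized dihedral description requires): the semidirect-product rule gives
$$(a,s)^2=(a\,\varphi(s)(a),\,s^2)=(aa^{-1},1)=(1,1),$$
so every element outside the normal subgroup $C_p\times C_p$ is an involution, matching the generalized dihedral group $\mathrm{Dih}(C_p^2)$.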
Question: Do these abstract groups have a name? Answer 1: <p>There are two ways of writing these groups, although both of them will be ambiguous (since without any further indication it would be unclear which one is being referred to). </p> <p>The first one, as suggested by Arnaud D., is writing it as a semidirect product of cyclic groups. Recall that given groups $G$ and $H$ and a group homomorphism $\varphi:H\rightarrow \mathrm{Aut}(G)$, the semidirect product of $G$ and $H$ with respect to $\varphi$ is the group $G\rtimes_{\varphi}H$ whose underlying set is $G\times H$ and its product is given by $$(g_0,h_0)(g_1,h_1)=(g_0\varphi(h_0)(g_1),h_0h_1)\text{.}$$ Note that this generalizes the direct product in the sense that those are just the semidirect product where $\varphi$ is the trivial map (which sends every element to the identity). In this way, in sloppy notation, both of your groups would be $(C_p \times C_p) \rtimes C_2$. Note that $\rtimes$ is $\times$ together with $\triangleleft$, used to say that something is a normal subgroup, so that's why the order is reversed.</p> <p>The second one I have seen is that of (sub)holomorph groups. Given a group $G$ and a subgroup $N$ of its automorphism group $\mathrm{Aut}(G)$, the $N$-holomorph group of $G$ is the group $$\mathrm{Hol}(G,N):=G\rtimes_{\iota} N$$ where $\iota$ is the inclusion map. This is less general and less common, as the holomorph group of $G$ usually refers to $\mathrm{Hol}(G):=\mathrm{Hol}(G,\mathrm{Aut}\,G)$. Using sloppy notation, your groups would be written as $\mathrm{Hol}(C_p\times C_p,C_2)$.</p> Answer 2: <p>$G_1$ is the direct product of a cyclic group and a dihedral group, $C_p\times D_p$ (or $C_p\times D_{2p}$ depending on your notation).</p> <p>$G_2$ is the generalised dihedral group on the abelian group $C_p^2$.</p> <p>(See <a href="https://en.wikipedia.org/wiki/Generalized_dihedral_group" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Generalized_dihedral_group</a> or <a href="https://groupprops.subwiki.org/wiki/Generalized_dihedral_group" rel="nofollow noreferrer">https://groupprops.subwiki.org/wiki/Generalized_dihedral_group</a>)</p>
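For the other case (a sketch; the choice of action is an assumption matching Answer 2's identification of $G_1$): if $\varphi(s)$ fixes the first $C_p$ factor and inverts the second, the product splits as
$$(C_p\times C_p)\rtimes_{\varphi}C_2\;\cong\;C_p\times(C_p\rtimes C_2)\;\cong\;C_p\times D_p,$$
which recovers the description of $G_1$ as a direct product of a cyclic group and a dihedral group.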
Hyperplane separation theorem, continuous linear functional
2,079,413
4
https://math.stackexchange.com/questions/2079413/hyperplane-separation-theorem-continuous-linear-functional
<p>One version of the Hyperplane separation theorem states that two non-intersecting convex sets in a normed linear space can be separated by a linear functional. In the proof I know, they use the gauge (or Minkowski functional) to show this. There's another version of this theorem that states that two such convex sets can be separated by a continuous linear functional. I wonder if somebody can explain to me how. Is the gauge bounded by the norm ($p_K(x)\leq C\lVert x \rVert$)? Is this true for all $x$ in a normed linear space?</p>
[ { "answer_id": 2088495, "score": 6, "text": "<p>It is not true that two non-intersecting convex sets can be separated by a continuous functional without further assumptions (Such as one is closed and the other compact, or one of the convex subsets has nonempty interior). </p>\n\n<p>Here is a counterexample (c.f exercise 1.14 of Brezis). In $\\ell^1(\\mathbb{R})$ define the two closed subspaces \n\\begin{align*}\nA &amp;= \\{x\\in \\ell^1 | x_{2n}=0 \\text{ for all } n\\in\\mathbb{N}\\} \\\\\nB &amp;= \\{x\\in \\ell^1 | x_{2n}=\\frac{1}{2^n}x_{2n-1} \\text{ for all } n\\in\\mathbb{N}\\}.\n\\end{align*}</p>\n\n<p>Observe that $\\overline{A+B}=\\ell^1$. Let $p$ be the point defined by $$ p_{2n}=\\frac{1}{2^n}, \\quad p_{2n-1}=0.$$ We have that $p \\notin A + B$ (we can see this by working backwards from the definitions and seeing the only possible way this could be is if there exists a point such that $x_{2n-1}=1$ for all $n$; but no such point exists.)</p>\n\n<p>Now, set $C = A+p$, we have that $B\\cap C = \\emptyset$ where $B$ and $C$ are closed convex subsets, with empty interior. Suppose the linear functional $f$ separates $B$ and $C$. So for all $b\\in B$, $f(b) &lt; \\alpha$ and for all $a\\in A$, $f(a+p) &gt; \\alpha$. Now, as $B$ is a subspace, we must have $f(b)=0$ for all $b\\in B$ and similarly, $f(a)=0$ for all $a\\in A$. However, such a linear functional is not continuous, since $A+B$ is dense in $\\ell^1$ and so $f$ must be zero if continuous. Therefore $B$ and $C$ is an example of two convex subsets which cannot be separated by a closed hyperplane.</p>\n" }, { "answer_id": 2079425, "score": 5, "text": "<p>This cannot be true. Since If you cosider any infinite dimensional normed space $(X, ||\\cdot ||_{*} )$ and any other norm $||\\cdot ||^{*} :X\\to \\mathbb{R} $ stronger than $||\\cdot ||_{*} ,$ then taking $K=\\{ x\\in X: ||x||^{*} \\leqslant 1\\}$ you obtain $p_K (x) =||x||^{*} \\nleqslant C||x||_{*}$ for any $C\\in\\mathbb{R}.$</p>\n" }, { "answer_id": 2088495, "score": 6, "text": "<p>It is not true that two non-intersecting convex sets can be separated by a continuous functional without further assumptions (Such as one is closed and the other compact, or one of the convex subsets has nonempty interior). </p>\n\n<p>Here is a counterexample (c.f exercise 1.14 of Brezis). In $\\ell^1(\\mathbb{R})$ define the two closed subspaces \n\\begin{align*}\nA &amp;= \\{x\\in \\ell^1 | x_{2n}=0 \\text{ for all } n\\in\\mathbb{N}\\} \\\\\nB &amp;= \\{x\\in \\ell^1 | x_{2n}=\\frac{1}{2^n}x_{2n-1} \\text{ for all } n\\in\\mathbb{N}\\}.\n\\end{align*}</p>\n\n<p>Observe that $\\overline{A+B}=\\ell^1$. Let $p$ be the point defined by $$ p_{2n}=\\frac{1}{2^n}, \\quad p_{2n-1}=0.$$ We have that $p \\notin A + B$ (we can see this by working backwards from the definitions and seeing the only possible way this could be is if there exists a point such that $x_{2n-1}=1$ for all $n$; but no such point exists.)</p>\n\n<p>Now, set $C = A+p$, we have that $B\\cap C = \\emptyset$ where $B$ and $C$ are closed convex subsets, with empty interior. Suppose the linear functional $f$ separates $B$ and $C$. So for all $b\\in B$, $f(b) &lt; \\alpha$ and for all $a\\in A$, $f(a+p) &gt; \\alpha$. Now, as $B$ is a subspace, we must have $f(b)=0$ for all $b\\in B$ and similarly, $f(a)=0$ for all $a\\in A$. However, such a linear functional is not continuous, since $A+B$ is dense in $\\ell^1$ and so $f$ must be zero if continuous. 
Therefore $B$ and $C$ give an example of two convex subsets which cannot be separated by a closed hyperplane.</p>\n" }, { "answer_id": 2079425, "score": 5, "text": "<p>This cannot be true: if you consider any infinite-dimensional normed space $(X, ||\\cdot ||_{*} )$ and any other norm $||\\cdot ||^{*} :X\\to \\mathbb{R} $ stronger than (and not equivalent to) $||\\cdot ||_{*} ,$ then taking $K=\\{ x\\in X: ||x||^{*} \\leqslant 1\\}$ you obtain $p_K (x) =||x||^{*} \\nleqslant C||x||_{*}$ for any $C\\in\\mathbb{R}.$</p>\n" } ]
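Worked detail for the claim $p \notin A+B$ above (a sketch; notation as in the first answer): if $p=a+b$ with $a\in A$ and $b\in B$, then comparing even coordinates gives
$$\frac{1}{2^n}=p_{2n}=a_{2n}+b_{2n}=0+\frac{1}{2^n}\,b_{2n-1},$$
so $b_{2n-1}=1$ for every $n$, and then $\sum_k |b_k|\geq\sum_n |b_{2n-1}|=\infty$ contradicts $b\in\ell^1$.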
Question: Hyperplane separation theorem, continuous linear functional Answer 1: <p>It is not true that two non-intersecting convex sets can be separated by a continuous functional without further assumptions (such as one being closed and the other compact, or one of the convex subsets having nonempty interior). </p> <p>Here is a counterexample (cf. Exercise 1.14 of Brezis). In $\ell^1(\mathbb{R})$ define the two closed subspaces \begin{align*} A &amp;= \{x\in \ell^1 | x_{2n}=0 \text{ for all } n\in\mathbb{N}\} \\ B &amp;= \{x\in \ell^1 | x_{2n}=\frac{1}{2^n}x_{2n-1} \text{ for all } n\in\mathbb{N}\}. \end{align*}</p> <p>Observe that $\overline{A+B}=\ell^1$. Let $p$ be the point defined by $$ p_{2n}=\frac{1}{2^n}, \quad p_{2n-1}=0.$$ We have that $p \notin A + B$ (we can see this by working backwards from the definitions: the only way this could happen is if there were a point with $x_{2n-1}=1$ for all $n$, but no such point exists in $\ell^1$).</p> <p>Now set $C = A+p$; then $B\cap C = \emptyset$, where $B$ and $C$ are closed convex subsets with empty interior. Suppose the linear functional $f$ separates $B$ and $C$. So for all $b\in B$, $f(b) &lt; \alpha$ and for all $a\in A$, $f(a+p) &gt; \alpha$. Now, as $B$ is a subspace, we must have $f(b)=0$ for all $b\in B$ and similarly, $f(a)=0$ for all $a\in A$. However, such a linear functional is not continuous, since $A+B$ is dense in $\ell^1$ and so $f$ would have to be zero if it were continuous. Therefore $B$ and $C$ give an example of two convex subsets which cannot be separated by a closed hyperplane.</p> Answer 2: <p>This cannot be true: if you consider any infinite-dimensional normed space $(X, ||\cdot ||_{*} )$ and any other norm $||\cdot ||^{*} :X\to \mathbb{R} $ stronger than (and not equivalent to) $||\cdot ||_{*} ,$ then taking $K=\{ x\in X: ||x||^{*} \leqslant 1\}$ you obtain $p_K (x) =||x||^{*} \nleqslant C||x||_{*}$ for any $C\in\mathbb{R}.$</p>
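A concrete instance of Answer 2's point (a sketch; the space and functions are illustrative assumptions, not from the thread): on $X=C[0,1]$ take $\|\cdot\|_{*}=\|\cdot\|_{1}$ and $\|\cdot\|^{*}=\|\cdot\|_{\infty}$, and let $f_n$ be a triangular spike of height $n$ and base width $2/n$. Then $\|f_n\|_{1}=1$ while $p_K(f_n)=\|f_n\|_{\infty}=n$, so no constant $C$ can give $p_K(f)\leq C\|f\|_{1}$ for all $f$.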
Why is $\lim\limits_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x}$ equal to $0$?
2,079,417
4
https://math.stackexchange.com/questions/2079417/why-is-lim-limits-x-to-infty-frac1-sqrtx22x-x-equal-to-0
<p>So I did one exercise, which was $\lim_{x\to +\infty} \frac{1}{\sqrt{x^2-2x}-x}$. I solved this one by:</p> <p>$\lim \limits_{x\to +\infty} \frac{1}{\sqrt{x^2-2x}-x} \frac{\sqrt{x^2-2x}+x}{\sqrt{x^2-2x}+x} $</p> <p>$\lim \limits_{x\to +\infty} \frac{\sqrt{x^2}\sqrt{1-\frac{2}{x}}+x}{-2x}$</p> <p>$\lim \limits_{x\to +\infty} \frac{\sqrt{1-\frac{2}{x}}+1}{-2} = -1 $</p> <p>The next exercise is $\lim \limits_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x}$. I thought that I could solve this one in the same way by:</p> <p>$\lim \limits_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x} \frac{\sqrt{x^2+2x}+x}{\sqrt{x^2+2x}+x} $</p> <p>$\lim \limits_{x\to -\infty} \frac{\sqrt{x^2}\sqrt{1+\frac{2}{x}}+x}{2x}$</p> <p>$\lim \limits_{x\to -\infty} \frac{\sqrt{1+\frac{2}{x}}+1}{2} = 1 $</p> <p>But apparently, the answer is $0$...</p> <p>Why can't the second one be solved in the same way as the first? And why is the answer $0$?</p>
[ { "answer_id": 2079430, "score": 5, "text": "<p>When $x\\rightarrow-\\infty$, you don't have $0$ as denominator, but $+\\infty$, because when $x \\rightarrow-\\infty, \\sqrt{x^2 -2x} \\sim |x|$ and, as $x &lt; 0$, $|x| = -x$, so you have $\\lim\\limits_{x\\to -\\infty}\\frac {1}{-x-x} = \\lim\\limits_{x\\to -\\infty}\\frac {1}{-2x} = 0$.</p>\n" }, { "answer_id": 2079499, "score": 3, "text": "<p>We already have answers showing correct ways to solve the problem.\nWhat I found interesting was the implied question:\nwhere is the error in the steps taken in the body of the question,\nwhich appeared to show that the limit is $1$?</p>\n\n<p>The steps in question are\n\\begin{align}\n\\newcommand{\\?}{\\stackrel{?}{=}}\n\\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x}-x} \n &amp;\\? \\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x}-x}\n \\frac{\\sqrt{x^2+2x}+x}{\\sqrt{x^2+2x}+x} \\tag1\\\\\n &amp;\\? \\lim_{x\\to -\\infty} \\frac{\\sqrt{x^2}\\sqrt{1+\\frac{2}{x}}+x}{2x} \\tag2\\\\\n &amp;\\? \\lim_{x\\to -\\infty} \\frac{\\sqrt{1+\\frac{2}{x}}+1}{2} \\tag3\\\\\n &amp;\\? 1 \\tag4\n\\end{align}</p>\n\n<p>The error is in the third step (from line $2$ to line $3$).\nIn fact, if $x &lt; 0,$ then $\\sqrt{x^2} = -x,$ so\n\\begin{align}\n\\lim_{x\\to -\\infty} \\frac{\\sqrt{x^2}\\sqrt{1+\\frac{2}{x}}+x}{2x} \n &amp;= \\lim_{x\\to -\\infty} \\frac{-x\\sqrt{1+\\frac{2}{x}}+x}{2x} \\\\\n &amp;= \\lim_{x\\to -\\infty} \\frac{-\\sqrt{1+\\frac{2}{x}}+1}{2} \\\\\n &amp;\\neq \\lim_{x\\to -\\infty} \\frac{\\sqrt{1+\\frac{2}{x}}+1}{2}\n\\end{align}</p>\n\n<p>One way to avoid errors of this sort (other than constant vigilance\nwhen manipulating square roots of expressions over negative-valued variables)\nis to begin by observing that\n$$\n\\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x} - x} \n = \\lim_{y\\to +\\infty} \\frac{1}{\\sqrt{y^2-2y} + y}.\n$$\nOne can then work on the limit over $y,$ confident in the knowledge that\n$\\sqrt{y^2}/y = 1.$</p>\n" }, { "answer_id": 2079432, "score": 2, "text": "<p>Let $-1/x=h$</p>\n\n<p>$x^2+2x=\\dfrac{1-2h}{h^2}\\implies\\sqrt{x^2+2x}=\\dfrac{\\sqrt{1-2h}}{\\sqrt{h^2}}=\\dfrac{\\sqrt{1-2h}}{|h|}$</p>\n\n<p>As $h\\to0^+, h&gt;0,|h|=+h$</p>\n\n<p>$$\\lim_{x\\to-\\infty}\\dfrac1{\\sqrt{x^2+2x}-x}=\\lim_{h\\to0^+}\\dfrac h{\\sqrt{1-2h}+1}=?$$</p>\n" }, { "answer_id": 2079430, "score": 5, "text": "<p>When $x\\rightarrow-\\infty$, you don't have $0$ as denominator, but $+\\infty$, because when $x \\rightarrow-\\infty, \\sqrt{x^2 -2x} \\sim |x|$ and, as $x &lt; 0$, $|x| = -x$, so you have $\\lim\\limits_{x\\to -\\infty}\\frac {1}{-x-x} = \\lim\\limits_{x\\to -\\infty}\\frac {1}{-2x} = 0$.</p>\n" }, { "answer_id": 2079499, "score": 3, "text": "<p>We already have answers showing correct ways to solve the problem.\nWhat I found interesting was the implied question:\nwhere is the error in the steps taken in the body of the question,\nwhich appeared to show that the limit is $1$?</p>\n\n<p>The steps in question are\n\\begin{align}\n\\newcommand{\\?}{\\stackrel{?}{=}}\n\\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x}-x} \n &amp;\\? \\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x}-x}\n \\frac{\\sqrt{x^2+2x}+x}{\\sqrt{x^2+2x}+x} \\tag1\\\\\n &amp;\\? \\lim_{x\\to -\\infty} \\frac{\\sqrt{x^2}\\sqrt{1+\\frac{2}{x}}+x}{2x} \\tag2\\\\\n &amp;\\? \\lim_{x\\to -\\infty} \\frac{\\sqrt{1+\\frac{2}{x}}+1}{2} \\tag3\\\\\n &amp;\\? 
1 \\tag4\n\\end{align}</p>\n\n<p>The error is in the third step (from line $2$ to line $3$).\nIn fact, if $x &lt; 0,$ then $\\sqrt{x^2} = -x,$ so\n\\begin{align}\n\\lim_{x\\to -\\infty} \\frac{\\sqrt{x^2}\\sqrt{1+\\frac{2}{x}}+x}{2x} \n &amp;= \\lim_{x\\to -\\infty} \\frac{-x\\sqrt{1+\\frac{2}{x}}+x}{2x} \\\\\n &amp;= \\lim_{x\\to -\\infty} \\frac{-\\sqrt{1+\\frac{2}{x}}+1}{2} \\\\\n &amp;\\neq \\lim_{x\\to -\\infty} \\frac{\\sqrt{1+\\frac{2}{x}}+1}{2}\n\\end{align}</p>\n\n<p>One way to avoid errors of this sort (other than constant vigilance\nwhen manipulating square roots of expressions over negative-valued variables)\nis to begin by observing that\n$$\n\\lim_{x\\to -\\infty} \\frac{1}{\\sqrt{x^2+2x} - x} \n = \\lim_{y\\to +\\infty} \\frac{1}{\\sqrt{y^2-2y} + y}.\n$$\nOne can then work on the limit over $y,$ confident in the knowledge that\n$\\sqrt{y^2}/y = 1.$</p>\n" }, { "answer_id": 2079432, "score": 2, "text": "<p>Let $-1/x=h$.</p>\n\n<p>$x^2+2x=\\dfrac{1-2h}{h^2}\\implies\\sqrt{x^2+2x}=\\dfrac{\\sqrt{1-2h}}{\\sqrt{h^2}}=\\dfrac{\\sqrt{1-2h}}{|h|}$</p>\n\n<p>As $h\\to0^+$, $h&gt;0$, so $|h|=+h$.</p>\n\n<p>$$\\lim_{x\\to-\\infty}\\dfrac1{\\sqrt{x^2+2x}-x}=\\lim_{h\\to0^+}\\dfrac h{\\sqrt{1-2h}+1}=?$$</p>\n" } ]
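Finishing the substitution in the last answer (a sketch; the original leaves the final evaluation as "$=?$"): as $h\to 0^+$,
$$\lim_{h\to0^+}\frac{h}{\sqrt{1-2h}+1}=\frac{0}{\sqrt{1}+1}=0,$$
which confirms that the limit is $0$.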
Question: Why is $\lim\limits_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x}$ equal to $0$? Answer 1: <p>When $x\rightarrow-\infty$, you don't have $0$ as the denominator, but $+\infty$, because when $x \rightarrow-\infty, \sqrt{x^2 +2x} \sim |x|$ and, as $x &lt; 0$, $|x| = -x$, so you have $\lim\limits_{x\to -\infty}\frac {1}{-x-x} = \lim\limits_{x\to -\infty}\frac {1}{-2x} = 0$.</p> Answer 2: <p>We already have answers showing correct ways to solve the problem. What I found interesting was the implied question: where is the error in the steps taken in the body of the question, which appeared to show that the limit is $1$?</p> <p>The steps in question are \begin{align} \newcommand{\?}{\stackrel{?}{=}} \lim_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x} &amp;\? \lim_{x\to -\infty} \frac{1}{\sqrt{x^2+2x}-x} \frac{\sqrt{x^2+2x}+x}{\sqrt{x^2+2x}+x} \tag1\\ &amp;\? \lim_{x\to -\infty} \frac{\sqrt{x^2}\sqrt{1+\frac{2}{x}}+x}{2x} \tag2\\ &amp;\? \lim_{x\to -\infty} \frac{\sqrt{1+\frac{2}{x}}+1}{2} \tag3\\ &amp;\? 1 \tag4 \end{align}</p> <p>The error is in the third step (from line $2$ to line $3$). In fact, if $x &lt; 0,$ then $\sqrt{x^2} = -x,$ so \begin{align} \lim_{x\to -\infty} \frac{\sqrt{x^2}\sqrt{1+\frac{2}{x}}+x}{2x} &amp;= \lim_{x\to -\infty} \frac{-x\sqrt{1+\frac{2}{x}}+x}{2x} \\ &amp;= \lim_{x\to -\infty} \frac{-\sqrt{1+\frac{2}{x}}+1}{2} \\ &amp;\neq \lim_{x\to -\infty} \frac{\sqrt{1+\frac{2}{x}}+1}{2} \end{align}</p> <p>One way to avoid errors of this sort (other than constant vigilance when manipulating square roots of expressions over negative-valued variables) is to begin by observing that $$ \lim_{x\to -\infty} \frac{1}{\sqrt{x^2+2x} - x} = \lim_{y\to +\infty} \frac{1}{\sqrt{y^2-2y} + y}. $$ One can then work on the limit over $y,$ confident in the knowledge that $\sqrt{y^2}/y = 1.$</p> Answer 3: <p>Let $-1/x=h$.</p> <p>$x^2+2x=\dfrac{1-2h}{h^2}\implies\sqrt{x^2+2x}=\dfrac{\sqrt{1-2h}}{\sqrt{h^2}}=\dfrac{\sqrt{1-2h}}{|h|}$</p> <p>As $h\to0^+$, $h&gt;0$, so $|h|=+h$.</p> <p>$$\lim_{x\to-\infty}\dfrac1{\sqrt{x^2+2x}-x}=\lim_{h\to0^+}\dfrac h{\sqrt{1-2h}+1}=?$$</p>
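A numerical sanity check of both limits (a sketch; values rounded): at $x=10^{4}$, $\sqrt{x^2-2x}-x\approx 9999.0-10000=-1.0$, so the first expression is $\approx -1$; at $x=-10^{4}$, $\sqrt{x^2+2x}-x\approx 9999.0+10000=19999$, so the second expression is $\approx 5.0\times 10^{-5}$, consistent with the limits $-1$ and $0$.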
How to think about orbits and stabilizers
2,079,589
4
https://math.stackexchange.com/questions/2079589/how-to-think-about-orbits-and-stabilizers
<p>I'm trying to grasp the notion of <strong>orbits</strong> and <strong>stabilizers</strong>. I have two questions:</p> <ol> <li>If we look at the picture below, how can we determine $|G(2 \to 6)|$ and $|{G_2}|$ given the following definitions:</li> </ol> <p>$$ \begin{gathered} G(x \to y) = \{ g \in G|g(x) = y\} \hfill \\ {G_x} = G(x \to x) \hfill \\ \end{gathered} $$</p> <ol start="2"> <li>Why is $\{1,3,5\}$ an orbit, but not $\{1,2,3\}$?</li> </ol> <p><a href="https://i.stack.imgur.com/sc4li.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sc4li.jpg" alt="The graph representing G"></a></p>
[ { "answer_id": 2079594, "score": 2, "text": "<p>The orbits and stabilizers depend on the group of symmetries under consideration. In this case that group seems to be the symmetries of the hexagon that preserve the inscribed triangle.</p>\n\n<p>That answers your second question: you can move $1$ to $3$ and $5$ but not to $2$.</p>\n\n<p>For the first question, look for the symmetries that move $2$ to $6$, then look for the symmetries that fix a vertex (the identity will always work but there may be more}.</p>\n" }, { "answer_id": 2079602, "score": 1, "text": "<p>You need to specify the group that's acting on this set. Based on the second question:</p>\n\n<blockquote>\n <p>Why is $\\{1,3,5\\}$ an orbit, but not $\\{1,2,3\\}$?</p>\n</blockquote>\n\n<p>I'm guessing the symmetry group of a triangle is acting on the triangle with vertices at $1$, $3$, and $5$ (and by extension is acting on the entire hexagon by preserving the relative positions of the points w.r.t. the triangle). If this is indeed the case, we cannot have $\\{1, 2, 3\\}$ as an orbit since sending the vertex labeled \"$1$\" to the point labeled \"$2$\" cannot preserve the triangle to which vertex \"$1$\" belongs. In other words, the triangle post-symmetry movement needs to be in the exact same orientation as the original triangle (ignoring labeled vertices).</p>\n\n<hr>\n\n<p>To tackle the first question, first recall that every element in the symmetry group of a triangle is some combination of a chosen flip $F$ and a $120^\\circ$ clockwise rotation $R$. You can check that $F$ and $R$ anticommute in the sense that $FR = R^{-1}F$. Armed with this knowledge, you can write down $6$ unique triangle-preserving isometries: $\\{R^kF^j \\ | \\ 0 \\leq k \\leq 2 \\text{ and } 0 \\leq j \\leq 1 \\}$. See which ones will send $2 \\mapsto 6$ and which ones will send $2 \\mapsto 2$ when applied to the triangle in the figure (and hence the hexagon).</p>\n" }, { "answer_id": 2081255, "score": 0, "text": "<p>Symmetry is a fancy word for automorphism. Determining the symmetries of the graph in your post is simply asking you to determine the automorphism group of the graph. </p>\n\n<p>Recall that if $G, H$ are graphs, an isomorphism $\\phi : V(G) \\to V(H)$ is a bijection such that if $ij \\in E(G)$, then $\\phi(i)\\phi(j) \\in E(H)$. This second condition is called a graph homomorphism:</p>\n\n<blockquote>\n <p>$ij \\in E(G)$, then $\\phi(i)\\phi(j) \\in E(H)$. </p>\n</blockquote>\n\n<p>An automorphism is an isomorphism $\\phi : V(G) \\to V(G)$, where $G$ is a graph. Trivially, the identity map is an automorphism of a graph. The automorphism group of $G$, $\\text{Aut}(G)$, measures how symmetric the graph is. That is, how can we relabel the vertices to preserve the graph structure. It is, however, more intuitive to think of sending vertex $i$ to vertex $j$, rather than relabeling. That is, the group action is a dynamical process.</p>\n\n<p>Now from the definition of a graph isomorphism, we have the following lemma:</p>\n\n<p><strong>Lemma:</strong> Let $G, H$ be isomorphic graphs, and let $\\phi : V(G) \\to V(H)$ be an isomorphism. Then for each $v \\in V(G)$, $\\text{deg}(v) = \\text{deg}(\\phi(v))$. </p>\n\n<p>Take some time to prove this lemma (and feel free to comment if you need a hint).</p>\n\n<p>Kaj Hansen's answer addresses the automorphism group of your graph from a geometric perspective. I'll expand from a more graph theoretic perspective. 
Recall that $\\text{Aut}(C_{6}) \\cong D_{12}$ (where $C_{6}$ is the cycle graph on six vertices, or the regular hexagon). By the above lemma, in your graph $G$, we have that $1$ cannot map to $2$ under the action of $\\text{Aut}(G)$. As $\\text{Aut}(G) \\leq D_{12}$, we have that $|\\text{Aut}(G)| \\leq 6$. Now show that $D_{6} \\cong \\text{Sym}(3) \\leq \\text{Aut}(G)$, by showing that the action of $D_{6}$ on the triangle induces an action on $G$.</p>\n" }, { "answer_id": 2223616, "score": 0, "text": "<p>If an automorphism of the graph maps vertex $a$ to vertex $b$, then the edges incident to $a$ are mapped to edges incident to $b$. Hence, the degrees of vertices $a$ and $b$ are equal. In the graph shown, vertices $1$ and $2$ have degrees $4$ and $2$, respectively. Hence, there does not exist any automorphism which maps vertex $1$ to vertex $2$. Hence, vertices $1$ and $2$ cannot be in the same orbit of the action of the automorphism group $G$ of the graph. </p>\n\n<p>Let $r$ be the permutation $(1,2,3,4,5,6)$ of the vertex set. As mentioned in the previous paragraph, $r \\notin G$. However, it can be observed that $r^2$ preserves adjacency and so $r^2 \\in G$. Similarly, $r^4 \\in G$. Hence, $G$ contains the set $\\{1,r^2,r^4\\}$ of three rotations. Also, $G$ contains the reflection $s = (1,3)(4,6)$ about the line which passes through vertex $2$ and the center of the figure. Thus, $G$ contains the subgroup $\\langle s,r^2 \\rangle$, which consists of $6$ permutations. To show that this subgroup is indeed the full automorphism group of the graph, we can use the orbit-stabilizer lemma, which asserts that $|G| = |G_2|~|2^G|$. Observe that if $g \\in G_2$, then $g$ must permute the neighbors of $2$ among themselves, and so $g$ either fixes vertex $1$ or maps vertex $1$ to vertex $3$. In the former case, $g$ must be the identity, and in the latter case $g=s$. Hence, $|G_2|=2$, whence $|G| = |G_2|~|2^G| = 2 \\cdot 3 = 6$. </p>\n\n<p>From the theory of permutation groups, we know that if $G$ acts on a set $V$ and $x, y \\in V$ are in the same orbit, then $|G_x| = |G(x \\rightarrow y)|$. Since vertices $6$ and $2$ are in the same orbit, $|G(2 \\rightarrow 6)|$ is equal to $|G_2|$, which equals $2$. </p>\n" } ]
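Since Answers 3 and 4 above reduce the question to computing the automorphism group of a small graph, the whole calculation can be checked by brute force. Below is a minimal sketch in Python; the edge set (the hexagon cycle plus the inscribed triangle on $\{1,3,5\}$) is my reading of the figure in the question, so treat it as an assumption.

```python
from itertools import permutations

# Edge set read off the figure (assumption): the hexagon cycle 1-2-3-4-5-6
# together with the inscribed triangle on vertices 1, 3, 5.
vertices = [1, 2, 3, 4, 5, 6]
edges = {frozenset(e) for e in
         [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1),   # hexagon
          (1, 3), (3, 5), (5, 1)]}                          # triangle

def is_automorphism(phi):
    """phi: dict vertex -> vertex; True iff phi maps edges onto edges."""
    return all(frozenset((phi[u], phi[v])) in edges for u, v in edges)

# Brute-force all 6! = 720 bijections of the vertex set.
aut = [dict(zip(vertices, img)) for img in permutations(vertices)
       if is_automorphism(dict(zip(vertices, img)))]

print(len(aut))                            # 6   -> |Aut(G)| = 6
print(sorted({g[1] for g in aut}))         # [1, 3, 5]  -> the orbit of 1
print(len([g for g in aut if g[2] == 2]))  # 2   -> |G_2| = 2
print(len([g for g in aut if g[2] == 6]))  # 2   -> |G(2 -> 6)| = 2
```

The output also confirms the degree argument from Answer 4: no automorphism sends $1$ to $2$, so $\{1,2,3\}$ cannot be an orbit, while $\{1,3,5\}$ is.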
Question: How to think about orbits and stabilizers Answer 1: <p>The orbits and stabilizers depend on the group of symmetries under consideration. In this case that group seems to be the symmetries of the hexagon that preserve the inscribed triangle.</p> <p>That answers your second question: you can move $1$ to $3$ and $5$ but not to $2$.</p> <p>For the first question, look for the symmetries that move $2$ to $6$, then look for the symmetries that fix a vertex (the identity will always work, but there may be more).</p> Answer 2: <p>You need to specify the group that's acting on this set. Based on the second question:</p> <blockquote> <p>Why is $\{1,3,5\}$ an orbit, but not $\{1,2,3\}$?</p> </blockquote> <p>I'm guessing the symmetry group of a triangle is acting on the triangle with vertices at $1$, $3$, and $5$ (and by extension is acting on the entire hexagon by preserving the relative positions of the points w.r.t. the triangle). If this is indeed the case, we cannot have $\{1, 2, 3\}$ as an orbit, since sending the vertex labeled "$1$" to the point labeled "$2$" cannot preserve the triangle to which vertex "$1$" belongs. In other words, the triangle after the symmetry is applied needs to be in the exact same orientation as the original triangle (ignoring labeled vertices).</p> <hr> <p>To tackle the first question, first recall that every element in the symmetry group of a triangle is some combination of a chosen flip $F$ and a $120^\circ$ clockwise rotation $R$. You can check that $F$ and $R$ anticommute in the sense that $FR = R^{-1}F$. Armed with this knowledge, you can write down $6$ unique triangle-preserving isometries: $\{R^kF^j \ | \ 0 \leq k \leq 2 \text{ and } 0 \leq j \leq 1 \}$. See which ones will send $2 \mapsto 6$ and which ones will send $2 \mapsto 2$ when applied to the triangle in the figure (and hence the hexagon).</p> Answer 3: <p>Symmetry is a fancy word for automorphism. Determining the symmetries of the graph in your post is simply asking you to determine the automorphism group of the graph. </p> <p>Recall that if $G, H$ are graphs, an isomorphism $\phi : V(G) \to V(H)$ is a bijection such that if $ij \in E(G)$, then $\phi(i)\phi(j) \in E(H)$. This second condition says that $\phi$ is a graph homomorphism:</p> <blockquote> <p>if $ij \in E(G)$, then $\phi(i)\phi(j) \in E(H)$. </p> </blockquote> <p>An automorphism is an isomorphism $\phi : V(G) \to V(G)$, where $G$ is a graph. Trivially, the identity map is an automorphism of a graph. The automorphism group of $G$, $\text{Aut}(G)$, measures how symmetric the graph is; that is, it describes how we can relabel the vertices while preserving the graph structure. It is, however, more intuitive to think of sending vertex $i$ to vertex $j$, rather than relabeling. That is, the group action is a dynamical process.</p> <p>Now from the definition of a graph isomorphism, we have the following lemma:</p> <p><strong>Lemma:</strong> Let $G, H$ be isomorphic graphs, and let $\phi : V(G) \to V(H)$ be an isomorphism. Then for each $v \in V(G)$, $\text{deg}(v) = \text{deg}(\phi(v))$. </p> <p>Take some time to prove this lemma (and feel free to comment if you need a hint).</p> <p>Kaj Hansen's answer addresses the automorphism group of your graph from a geometric perspective. I'll expand from a more graph-theoretic perspective. Recall that $\text{Aut}(C_{6}) \cong D_{12}$ (where $C_{6}$ is the cycle graph on six vertices, or the regular hexagon). By the above lemma, in your graph $G$, we have that $1$ cannot map to $2$ under the action of $\text{Aut}(G)$. 
As $\text{Aut}(G) \leq D_{12}$, we have that $|\text{Aut}(G)| \leq 6$. Now show that $D_{6} \cong \text{Sym}(3) \leq \text{Aut}(G)$, by showing that the action of $D_{6}$ on the triangle induces an action on $G$.</p> Answer 4: <p>If an automorphism of the graph maps vertex $a$ to vertex $b$, then the edges incident to $a$ are mapped to edges incident to $b$. Hence, the degrees of vertices $a$ and $b$ are equal. In the graph shown, vertices $1$ and $2$ have degrees $4$ and $2$, respectively. Hence, there does not exist any automorphism which maps vertex $1$ to vertex $2$. Hence, vertices $1$ and $2$ cannot be in the same orbit of the action of the automorphism group $G$ of the graph. </p> <p>Let $r$ be the permutation $(1,2,3,4,5,6)$ of the vertex set. As mentioned in the previous paragraph, $r \notin G$. However, it can be observed that $r^2$ preserves adjacency and so $r^2 \in G$. Similarly, $r^4 \in G$. Hence, $G$ contains the set $\{1,r^2,r^4\}$ of three rotations. Also, $G$ contains the reflection $s = (1,3)(4,6)$ about the line which passes through vertex $2$ and the center of the figure. Thus, $G$ contains the subgroup $\langle s,r^2 \rangle$, which consists of $6$ permutations. To show that this subgroup is indeed the full automorphism group of the graph, we can use the orbit-stabilizer lemma, which asserts that $|G| = |G_2|~|2^G|$. Observe that if $g \in G_2$, then $g$ must permute the neighbors of $2$ among themselves, and so $g$ either fixes vertex $1$ or maps vertex $1$ to vertex $3$. In the former case, $g$ must be the identity, and in the latter case $g=s$. Hence, $|G_2|=2$, whence $|G| = |G_2|~|2^G| = 2 \cdot 3 = 6$. </p> <p>From the theory of permutation groups, we know that if $G$ acts on a set $V$ and $x, y \in V$ are in the same orbit, then $|G_x| = |G(x \rightarrow y)|$. Since vertices $6$ and $2$ are in the same orbit, $|G(2 \rightarrow 6)|$ is equal to $|G_2|$, which equals $2$. </p>
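Answer 2's algebraic description of the triangle's symmetry group can also be verified mechanically. The sketch below checks the relation $FR = R^{-1}F$ and that the six words $R^kF^j$ are pairwise distinct; the particular flip $F$ (the one fixing vertex $1$) is chosen only for illustration, as any flip satisfies the same relation.

```python
# Permutations of the triangle vertices {1, 3, 5}, written as dicts.
R     = {1: 3, 3: 5, 5: 1}   # 120-degree rotation 1 -> 3 -> 5 -> 1
R_inv = {1: 5, 3: 1, 5: 3}   # its inverse
F     = {1: 1, 3: 5, 5: 3}   # flip through vertex 1 (illustrative choice)

def compose(g, f):
    """Permutation composition (g o f): apply f first, then g."""
    return {x: g[f[x]] for x in f}

# The anticommutation relation FR = R^{-1}F from Answer 2:
print(compose(F, R) == compose(R_inv, F))   # True

# The six words R^k F^j (0 <= k <= 2, 0 <= j <= 1) are pairwise distinct:
identity = {1: 1, 3: 3, 5: 5}
words = set()
for k in range(3):
    for j in range(2):
        g = identity
        for _ in range(j):
            g = compose(F, g)
        for _ in range(k):
            g = compose(R, g)
        words.add(tuple(sorted(g.items())))
print(len(words))                           # 6
```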
How to show that the disc algebra $A(D)$ is a Banach space?
2,079,658
4
https://math.stackexchange.com/questions/2079658/how-to-show-that-the-disc-algebra-ad-is-a-banach-space
<p>I know that <span class="math-container">$A(D)$</span> is the space of functions analytic on the open unit disc and continuous up to the boundary; it is a subspace of the Hardy space <span class="math-container">$H^\infty$</span>, which is defined as the vector space of bounded holomorphic functions on the disc. How can I show that the disc algebra is a Banach space?</p> <p>I would appreciate any help.</p>
[ { "answer_id": 2079671, "score": 4, "text": "<p>The magic phrase is <em>Weierstraß-convergence-theorem</em> (I am not 100 % sure if this is its name in the english literature). The point is that $A(D)$ is obviously a subalgebra of $C(D)$ (set of continuous functions on $D$) which is a Banach space (even Banach algebra) w.r.t the maximumnorm/supremumnorm (again I'm not sure about the english word, anyway I think you know this statement). An important and well known theorem of Weierstraß states that a sequence of holomorhpic locally uniformly convergent functions converge to a holomorhic function. That is to say that $A(D)$ is closed inside $C(D)$.</p>\n" }, { "answer_id": 2080323, "score": 3, "text": "<p>$\\scriptstyle \\text{Let me add some details :}$ </p>\n\n<p>If a sequence of bounded continuous functions converges in the $\\sup$ norm then the limit is continuous and bounded. </p>\n\n<p>If a sequence $f_n(z)$ of holomorphic functions converges uniformly to $f(z)$ then $\\int_\\gamma f(z) dz= \\lim_{n \\to \\infty} \\int_\\gamma f_n(z)dz = 0$ for any closed contour $\\gamma$, so that $F(z) = \\int_a^z f(s)ds$ is well-defined (it doesn't depend on the path $a \\to z$) and holomorphic, which means by the \n'holomorphic $\\implies$ analytic' theorem that $f(z) = F'(z)$ is analytic too.</p>\n\n<p>Altogether, a Cauchy sequence in the disk algebra converges to an analytic function continuous on the boundary.</p>\n" }, { "answer_id": 2101492, "score": 2, "text": "<p>$\n\\def \\d{\\mathrm{d}}$\nSuppose $\\{f_n\\}$ is a Cauchy sequence in the disc algebra, than it converges to a continuous function $f$ on the disc.\n For every closed triangle $\\Delta\\subset D$, we have\n \\begin{align*}\n\t\t&amp;\\quad\\int_{\\partial\\Delta}f(z)\\d z \\\\&amp;=\\lim_{n\\to\\infty}\\int_{\\partial\\Delta}f_n(z)\\d z\n({\\text{the Domainated Convergence Theorem}})\\\\\n&amp;=0({\\text{Cauchy's Theorem in a Convex Set}}).\n\t\t\\end{align*} \n Hence $f$ is holomorphic by Morera's Theorem.</p>\n\n<p>All of these can be found in chapter 10 of <em>Real and Complex Analysis</em> by Rudin </p>\n" }, { "answer_id": 2079671, "score": 4, "text": "<p>The magic phrase is <em>Weierstraß-convergence-theorem</em> (I am not 100 % sure if this is its name in the english literature). The point is that $A(D)$ is obviously a subalgebra of $C(D)$ (set of continuous functions on $D$) which is a Banach space (even Banach algebra) w.r.t the maximumnorm/supremumnorm (again I'm not sure about the english word, anyway I think you know this statement). An important and well known theorem of Weierstraß states that a sequence of holomorhpic locally uniformly convergent functions converge to a holomorhic function. That is to say that $A(D)$ is closed inside $C(D)$.</p>\n" }, { "answer_id": 2080323, "score": 3, "text": "<p>$\\scriptstyle \\text{Let me add some details :}$ </p>\n\n<p>If a sequence of bounded continuous functions converges in the $\\sup$ norm then the limit is continuous and bounded. 
Question: How to show that the disc algebra $A(D)$ is a Banach space? Answer 1: <p>The magic phrase is <em>Weierstraß convergence theorem</em> (I am not 100 % sure if this is its name in the English literature). The point is that $A(D)$ is obviously a subalgebra of $C(D)$ (the set of continuous functions on $D$), which is a Banach space (even a Banach algebra) w.r.t. the maximum norm/supremum norm (again I'm not sure about the English word; anyway, I think you know this statement). An important and well-known theorem of Weierstraß states that a locally uniformly convergent sequence of holomorphic functions converges to a holomorphic function. That is to say, $A(D)$ is closed inside $C(D)$.</p> Answer 2: <p>$\scriptstyle \text{Let me add some details:}$ </p> <p>If a sequence of bounded continuous functions converges in the $\sup$ norm, then the limit is continuous and bounded. </p> <p>If a sequence $f_n(z)$ of holomorphic functions converges uniformly to $f(z)$, then $\int_\gamma f(z) dz= \lim_{n \to \infty} \int_\gamma f_n(z)dz = 0$ for any closed contour $\gamma$, so that $F(z) = \int_a^z f(s)ds$ is well-defined (it doesn't depend on the path $a \to z$) and holomorphic, which means by the 'holomorphic $\implies$ analytic' theorem that $f(z) = F'(z)$ is analytic too.</p> <p>Altogether, a Cauchy sequence in the disk algebra converges to an analytic function continuous on the boundary.</p> Answer 3: <p>$ \def \d{\mathrm{d}}$ Suppose $\{f_n\}$ is a Cauchy sequence in the disc algebra; then it converges to a continuous function $f$ on the disc. For every closed triangle $\Delta\subset D$, we have \begin{align*} &amp;\quad\int_{\partial\Delta}f(z)\d z \\&amp;=\lim_{n\to\infty}\int_{\partial\Delta}f_n(z)\d z ({\text{the Dominated Convergence Theorem}})\\ &amp;=0({\text{Cauchy's Theorem in a Convex Set}}). \end{align*} Hence $f$ is holomorphic by Morera's Theorem.</p> <p>All of these can be found in chapter 10 of <em>Real and Complex Analysis</em> by Rudin.</p>
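The Morera-style argument in Answer 3 can be spot-checked numerically. Below is a minimal sketch (not from the original thread; the contour radius, sample count, and the geometric-series limit $f(z)=1/(1-z)$ are illustrative assumptions): it approximates a closed-contour integral of the uniform limit of the holomorphic partial sums $f_n(z)=1+z+\dots+z^n$ and finds it numerically zero, as Cauchy's theorem predicts for a holomorphic limit.

```python
import numpy as np

# Closed contour |z| = 0.5 inside the unit disc, parameterised by t in [0, 2*pi).
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
dt = t[1] - t[0]
z = 0.5 * np.exp(1j * t)
dz_dt = 0.5j * np.exp(1j * t)  # derivative of the parameterisation

# Uniform limit on |z| <= 0.5 of the holomorphic partial sums 1 + z + ... + z^n.
f = 1.0 / (1.0 - z)

# Riemann-sum approximation of the contour integral of the limit function.
integral = np.sum(f * dz_dt) * dt
print(abs(integral))  # ~1e-15, i.e. zero up to rounding, consistent with Morera
```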
What is the value of $a/b/c$?
2,079,663
4
https://math.stackexchange.com/questions/2079663/what-is-the-value-of-a-b-c
<p>What is the value of $$a/b/c,$$</p> <p>for $a,b,c&gt;0$?</p> <p>Is it $ac/b$ or $a/bc$?</p> <p>I got confused about this while studying wireless communications. The unit of spectral efficiency is defined as $bit/second/Hz$. </p> <ul> <li>if it is $ac/b$ then the unit of spectral efficiency will be $bit Hz/second$, which is $bit/second^2$; and </li> <li>if it is $a/bc$ then I will have $bit/(second \cdot Hz)$, so it is simply $bit$. </li> </ul> <p>Hence, I guess</p> <p>$$a/b/c=a/bc.$$</p> <p>But that's not true, is it?</p>
[ { "answer_id": 2079677, "score": 5, "text": "<p>Answering the question you really seemed to want to ask.</p>\n\n<p>Yes. Bits per second per Hertz comes to just bits. But, it needs to interpreted. That unit is used when you are using a channel with a given bandwith $W$ (measure in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The Hertz's and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, if you double your bandwidth you again double the number of bits you can transmit.</p>\n\n<p>For details about the definition of spectral efficiency you probably want to ask at <a href=\"http://dsp.stackexchange.com\">DSP.SE</a>. I will add that often spectral efficiency is measure in bpcu (= bits per channel use). I'm not a telcomm engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK-modulation = 2 bpcu, 16-QAM = 4 bpcu et cetera).</p>\n" }, { "answer_id": 2079667, "score": 4, "text": "<p>That depends on where you choose to put the <em>big</em> fractions because $\\frac{\\frac ab}{c}\\not=\\frac{a}{\\frac bc}$, the term $a/b/c$ is ambiguous. </p>\n" }, { "answer_id": 2079684, "score": 3, "text": "<p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p>\n\n<p>$$(a/b)/c = \\frac{a/b}{c} = \\frac{a}{b}\\frac{1}{c} = \\frac{a}{bc}$$</p>\n\n<p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted with a given frequency range. It's the information \"velocity\" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. In other words we're dividing </p>\n\n<p>$$\\frac{100bit/s}{5Hz}= 20\\frac{bit/s}{Hz}$$</p>\n\n<p>So the unit is (bit/s)/Hz.</p>\n" }, { "answer_id": 2079677, "score": 5, "text": "<p>Answering the question you really seemed to want to ask.</p>\n\n<p>Yes. Bits per second per Hertz comes to just bits. But, it needs to interpreted. That unit is used when you are using a channel with a given bandwith $W$ (measure in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The Hertz's and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, if you double your bandwidth you again double the number of bits you can transmit.</p>\n\n<p>For details about the definition of spectral efficiency you probably want to ask at <a href=\"http://dsp.stackexchange.com\">DSP.SE</a>. I will add that often spectral efficiency is measure in bpcu (= bits per channel use). 
I'm not a telcomm engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK-modulation = 2 bpcu, 16-QAM = 4 bpcu et cetera).</p>\n" }, { "answer_id": 2079667, "score": 4, "text": "<p>That depends on where you choose to put the <em>big</em> fractions because $\\frac{\\frac ab}{c}\\not=\\frac{a}{\\frac bc}$, the term $a/b/c$ is ambiguous. </p>\n" }, { "answer_id": 2079684, "score": 3, "text": "<p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p>\n\n<p>$$(a/b)/c = \\frac{a/b}{c} = \\frac{a}{b}\\frac{1}{c} = \\frac{a}{bc}$$</p>\n\n<p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted with a given frequency range. It's the information \"velocity\" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. In other words we're dividing </p>\n\n<p>$$\\frac{100bit/s}{5Hz}= 20\\frac{bit/s}{Hz}$$</p>\n\n<p>So the unit is (bit/s)/Hz.</p>\n" }, { "answer_id": 2079677, "score": 5, "text": "<p>Answering the question you really seemed to want to ask.</p>\n\n<p>Yes. Bits per second per Hertz comes to just bits. But, it needs to interpreted. That unit is used when you are using a channel with a given bandwith $W$ (measure in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The Hertz's and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, if you double your bandwidth you again double the number of bits you can transmit.</p>\n\n<p>For details about the definition of spectral efficiency you probably want to ask at <a href=\"http://dsp.stackexchange.com\">DSP.SE</a>. I will add that often spectral efficiency is measure in bpcu (= bits per channel use). I'm not a telcomm engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK-modulation = 2 bpcu, 16-QAM = 4 bpcu et cetera).</p>\n" }, { "answer_id": 2079667, "score": 4, "text": "<p>That depends on where you choose to put the <em>big</em> fractions because $\\frac{\\frac ab}{c}\\not=\\frac{a}{\\frac bc}$, the term $a/b/c$ is ambiguous. </p>\n" }, { "answer_id": 2079684, "score": 3, "text": "<p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p>\n\n<p>$$(a/b)/c = \\frac{a/b}{c} = \\frac{a}{b}\\frac{1}{c} = \\frac{a}{bc}$$</p>\n\n<p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted with a given frequency range. It's the information \"velocity\" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. 
In other words we're dividing </p>\n\n<p>$$\\frac{100bit/s}{5Hz}= 20\\frac{bit/s}{Hz}$$</p>\n\n<p>So the unit is (bit/s)/Hz.</p>\n" }, { "answer_id": 2079677, "score": 5, "text": "<p>Answering the question you really seemed to want to ask.</p>\n\n<p>Yes. Bits per second per Hertz comes to just bits. But, it needs to interpreted. That unit is used when you are using a channel with a given bandwith $W$ (measure in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The Hertz's and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, if you double your bandwidth you again double the number of bits you can transmit.</p>\n\n<p>For details about the definition of spectral efficiency you probably want to ask at <a href=\"http://dsp.stackexchange.com\">DSP.SE</a>. I will add that often spectral efficiency is measure in bpcu (= bits per channel use). I'm not a telcomm engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK-modulation = 2 bpcu, 16-QAM = 4 bpcu et cetera).</p>\n" }, { "answer_id": 2079667, "score": 4, "text": "<p>That depends on where you choose to put the <em>big</em> fractions because $\\frac{\\frac ab}{c}\\not=\\frac{a}{\\frac bc}$, the term $a/b/c$ is ambiguous. </p>\n" }, { "answer_id": 2079684, "score": 3, "text": "<p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p>\n\n<p>$$(a/b)/c = \\frac{a/b}{c} = \\frac{a}{b}\\frac{1}{c} = \\frac{a}{bc}$$</p>\n\n<p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted with a given frequency range. It's the information \"velocity\" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. In other words we're dividing </p>\n\n<p>$$\\frac{100bit/s}{5Hz}= 20\\frac{bit/s}{Hz}$$</p>\n\n<p>So the unit is (bit/s)/Hz.</p>\n" }, { "answer_id": 2079677, "score": 5, "text": "<p>Answering the question you really seemed to want to ask.</p>\n\n<p>Yes. Bits per second per Hertz comes to just bits. But, it needs to interpreted. That unit is used when you are using a channel with a given bandwith $W$ (measure in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The Hertz's and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, if you double your bandwidth you again double the number of bits you can transmit.</p>\n\n<p>For details about the definition of spectral efficiency you probably want to ask at <a href=\"http://dsp.stackexchange.com\">DSP.SE</a>. I will add that often spectral efficiency is measure in bpcu (= bits per channel use). 
I'm not a telcomm engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK-modulation = 2 bpcu, 16-QAM = 4 bpcu et cetera).</p>\n" }, { "answer_id": 2079667, "score": 4, "text": "<p>That depends on where you choose to put the <em>big</em> fractions because $\\frac{\\frac ab}{c}\\not=\\frac{a}{\\frac bc}$, the term $a/b/c$ is ambiguous. </p>\n" }, { "answer_id": 2079684, "score": 3, "text": "<p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p>\n\n<p>$$(a/b)/c = \\frac{a/b}{c} = \\frac{a}{b}\\frac{1}{c} = \\frac{a}{bc}$$</p>\n\n<p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted with a given frequency range. It's the information \"velocity\" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. In other words we're dividing </p>\n\n<p>$$\\frac{100bit/s}{5Hz}= 20\\frac{bit/s}{Hz}$$</p>\n\n<p>So the unit is (bit/s)/Hz.</p>\n" } ]
Question: What is the value of $a/b/c$? Answer 1: <p>Answering the question you really seemed to want to ask.</p> <p>Yes. Bits per second per Hertz comes to just bits. But it needs to be interpreted. That unit is used when you are using a channel with a given bandwidth $W$ (measured in Hertz, so for example you may have access to a band of width $W=5$ MHz). Then if you use that channel with an efficiency of $k$ bits/second/Hertz for a time slice of length $T$ (seconds), you expect to transmit $kTW$ bits. The hertz and seconds cancel, and this gives a pure number of bits. As you would expect: if you double your time you double the number of transmitted bits, and if you double your bandwidth you again double the number of bits you can transmit.</p> <p>For details about the definition of spectral efficiency you probably want to ask at <a href="http://dsp.stackexchange.com">DSP.SE</a>. I will add that often spectral efficiency is measured in bpcu (= bits per channel use). I'm not a telecom engineer, so I won't say whether you include things like FEC code rate into spectral efficiency, or whether you just take into account the modulation scheme (such as QPSK modulation = 2 bpcu, 16-QAM = 4 bpcu, et cetera).</p> Answer 2: <p>That depends on where you choose to put the <em>big</em> fractions: because $\frac{\frac ab}{c}\not=\frac{a}{\frac bc}$, the term $a/b/c$ is ambiguous. </p> Answer 3: <p>It's a bit ambiguous without parentheses, but usually I would perform equivalent operations from left to right. So it would be</p> <p>$$(a/b)/c = \frac{a/b}{c} = \frac{a}{b}\frac{1}{c} = \frac{a}{bc}$$</p> <p>In this case you're dealing with physical units, so think about what spectral efficiency is trying to say. It's supposed to be a measure of how fast information can be transmitted within a given frequency range. It's the information "velocity" for a given bandwidth. If my communication system can transfer $100 bit/s$ with a $5Hz$ bandwidth, we would expect the spectral efficiency to be $20$ of some unit. In other words we're dividing </p> <p>$$\frac{100bit/s}{5Hz}= 20\frac{bit/s}{Hz}$$</p> <p>So the unit is (bit/s)/Hz.</p>
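A side note on Answer 3's left-to-right convention: programming languages also treat `/` as left-associative, so `a/b/c` evaluates as `(a/b)/c`, which equals `a/(b*c)`. A small Python sketch (the numbers are illustrative, not from the thread):

```python
# Division is left-associative: a / b / c parses as (a / b) / c = a / (b * c).
a, b, c = 100.0, 5.0, 2.0
assert a / b / c == (a / b) / c == a / (b * c)  # all equal 10.0

# Unit bookkeeping from Answer 1: a rate in (bit/s)/Hz times a bandwidth in Hz
# times a duration in s leaves a plain number of bits.
efficiency = 20.0  # spectral efficiency, (bit/s)/Hz
bandwidth = 5.0    # Hz
duration = 3.0     # s
print(efficiency * bandwidth * duration, "bits")  # 300.0 bits
```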
Euler's Formula For Different Bases
2,079,805
4
https://math.stackexchange.com/questions/2079805/eulers-formula-for-different-bases
<p>I was messing around with Euler's formula by changing up the values of $e$ and $\pi$, and found the following trend:</p> <p>$$a^{i\frac{\Large \pi}{\Large \log(a)}} = -1$$</p> <p>It seems to work for all positive real numbers $a \ne 1$. Is this true? I haven't learned about the identity in school, so I apologize if this is trivial or common sense. Thanks!</p>
[ { "answer_id": 2079812, "score": 3, "text": "<p>By definition, $a^z=e^{z\\log a}$. So, for any $\\lambda$, \n$$\na^{i\\lambda}=e^{\\lambda\\log a}. \n$$\nTo get $-1$, we need $\\lambda\\log a=i(2k+1)\\pi$ for $k\\in \\mathbb N$. In particular, for $k=0$, we get $\\lambda=i\\pi/\\log a$. So, as you say,\n$$\na^{\\dfrac{i\\pi}{\\log a}}=-1.\n$$</p>\n" }, { "answer_id": 2079815, "score": 0, "text": "<p>Hint:</p>\n\n<p>For $a&gt;0$ and $a\\ne 1$ we have $a^{1/\\log a}=e$</p>\n" }, { "answer_id": 2079812, "score": 3, "text": "<p>By definition, $a^z=e^{z\\log a}$. So, for any $\\lambda$, \n$$\na^{i\\lambda}=e^{\\lambda\\log a}. \n$$\nTo get $-1$, we need $\\lambda\\log a=i(2k+1)\\pi$ for $k\\in \\mathbb N$. In particular, for $k=0$, we get $\\lambda=i\\pi/\\log a$. So, as you say,\n$$\na^{\\dfrac{i\\pi}{\\log a}}=-1.\n$$</p>\n" }, { "answer_id": 2079815, "score": 0, "text": "<p>Hint:</p>\n\n<p>For $a&gt;0$ and $a\\ne 1$ we have $a^{1/\\log a}=e$</p>\n" }, { "answer_id": 2079812, "score": 3, "text": "<p>By definition, $a^z=e^{z\\log a}$. So, for any $\\lambda$, \n$$\na^{i\\lambda}=e^{\\lambda\\log a}. \n$$\nTo get $-1$, we need $\\lambda\\log a=i(2k+1)\\pi$ for $k\\in \\mathbb N$. In particular, for $k=0$, we get $\\lambda=i\\pi/\\log a$. So, as you say,\n$$\na^{\\dfrac{i\\pi}{\\log a}}=-1.\n$$</p>\n" }, { "answer_id": 2079815, "score": 0, "text": "<p>Hint:</p>\n\n<p>For $a&gt;0$ and $a\\ne 1$ we have $a^{1/\\log a}=e$</p>\n" }, { "answer_id": 2079812, "score": 3, "text": "<p>By definition, $a^z=e^{z\\log a}$. So, for any $\\lambda$, \n$$\na^{i\\lambda}=e^{\\lambda\\log a}. \n$$\nTo get $-1$, we need $\\lambda\\log a=i(2k+1)\\pi$ for $k\\in \\mathbb N$. In particular, for $k=0$, we get $\\lambda=i\\pi/\\log a$. So, as you say,\n$$\na^{\\dfrac{i\\pi}{\\log a}}=-1.\n$$</p>\n" }, { "answer_id": 2079815, "score": 0, "text": "<p>Hint:</p>\n\n<p>For $a&gt;0$ and $a\\ne 1$ we have $a^{1/\\log a}=e$</p>\n" }, { "answer_id": 2079812, "score": 3, "text": "<p>By definition, $a^z=e^{z\\log a}$. So, for any $\\lambda$, \n$$\na^{i\\lambda}=e^{\\lambda\\log a}. \n$$\nTo get $-1$, we need $\\lambda\\log a=i(2k+1)\\pi$ for $k\\in \\mathbb N$. In particular, for $k=0$, we get $\\lambda=i\\pi/\\log a$. So, as you say,\n$$\na^{\\dfrac{i\\pi}{\\log a}}=-1.\n$$</p>\n" }, { "answer_id": 2079815, "score": 0, "text": "<p>Hint:</p>\n\n<p>For $a&gt;0$ and $a\\ne 1$ we have $a^{1/\\log a}=e$</p>\n" } ]
Question: Euler's Formula For Different Bases Answer 1: <p>By definition, $a^z=e^{z\log a}$. So, for any $\lambda$, $$ a^{i\lambda}=e^{i\lambda\log a}. $$ To get $-1$, we need $i\lambda\log a=i(2k+1)\pi$, i.e. $\lambda\log a=(2k+1)\pi$, for $k\in \mathbb Z$. In particular, for $k=0$, we get $\lambda=\pi/\log a$. So, as you say, $$ a^{\dfrac{i\pi}{\log a}}=-1. $$</p> Answer 2: <p>Hint:</p> <p>For $a&gt;0$ and $a\ne 1$ we have $a^{1/\log a}=e$.</p>
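The identity in Answer 1 is easy to verify numerically. A minimal sketch (the sample values of $a$ are chosen arbitrarily, with $a \ne 1$ so that $\log a \ne 0$):

```python
import cmath

# For real a > 0 with a != 1:
#   a**(i*pi / log a) = exp((i*pi / log a) * log a) = exp(i*pi) = -1.
for a in (2.0, 10.0, 0.5):
    value = a ** (1j * cmath.pi / cmath.log(a))
    print(a, value)  # each value is -1 up to floating-point rounding
```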
Bayes Formula: Intuition
2,080,059
4
https://math.stackexchange.com/questions/2080059/bayes-formula-intuition
<p>I found an intuitive explanation for Bayes' Theorem but I do not understand point 3:</p> <p><a href="https://i.stack.imgur.com/vkcVw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vkcVw.png" alt="enter image description here"></a></p> <p>With $P(A|B) = \frac{P(A \cap B)}{P(B)}$ I see that $P(B)$ is a scaling factor that lets the partitions of $B$ sum up to 1. But I can't seem to wrap my head around the meaning of the ratio in Bayes' Theorem.</p>
[ { "answer_id": 2080086, "score": 4, "text": "<p>I can give you a similar interpretation. Write it as\n$$\\Pr(B|A)=\\frac{\\Pr(A|B)\\Pr(B)}{\\Pr(A)}=\\frac{\\Pr(B\\cap A)}{\\Pr(A)}$$ </p>\n\n<p>First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\\Omega$ whose probability is one. We can write $\\Pr(B)$ as $$\\frac{\\Pr(B\\cap\\Omega)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{1}$$</p>\n\n<p>However, $B|A$ means that for event $B$ the sample space is <em>reduced</em> from $\\Omega$ to $A$, hence we have $\\Pr(A)$ in denomerator instead of $\\Pr(\\Omega)$. Similarly, in the numerator we have $\\Pr(B\\cap A)$ since we only consider $B$ in a subset of the sample space that intersects with $A$.</p>\n" }, { "answer_id": 2080088, "score": 3, "text": "<p>What is $P(A|B)$ intuitively? One way to think about probabilities is as proportions. For example, $P(A)$ is the proportion of the event $A$ happening out of every possible outcome, or if you prefer a more geometric intuition, imagine that $A$ is some shape on a piece of paper, $P(A)$ signifies the proportion of the total area that $A$ covers of the piece of paper.</p>\n\n<p>What then does the conditional probability $P(A|B)$ correspond to? If we had two shapes $A$ and $B$ on our piece of paper, the $P(A|B)$ corresponds to the proportion that $A$ covers of the shape $B$. In a way we are restricting our space to the shape $B$ and asking what the proportion of $A$ is out of the total space. In terms of probability, we are restricting all the possible event to those in which the event $B$ happens and asking what proportion does $A$ happen out of all possible events.</p>\n\n<p>The ratio $\\frac{P(A|B)}{P(A)}$ tells us how the proportion of $A$ changes after we restrict the total space to $B$. The interesting thing about this ratio is that it is symmetric in $A$ and $B$, which is one way to interpret Baye's Theorem. That is, given neither $P(A)$ or $P(B)$ are zero, $$\\frac{P(A|B)}{P(A)}=\\frac{P(B|A)}{P(B)}.$$</p>\n\n<p>In terms of the formula above, How would one figure out $P(B|A)$? Well this is the proportion of $B$ in $A$. Trivially, we have that $$P(B|A)=\\frac{P(B|A)}{P(B)}P(B). $$ So this is the proportion of $B$ in the total space times the ratio of how much it changes upon restriction of $A$, but Bayes theorem tell us that this ratio is exactly the same as $\\frac{P(A|B)}{P(A)}$, which gives the usual form of Bayes'.</p>\n" }, { "answer_id": 2080098, "score": 1, "text": "<p>Well, you have a grasp of the intuition behind the following:$$\\begin{align}\\mathsf P(A\\cap B)~&amp;=~\\mathsf P(B\\mid A)~\\mathsf P(A) \\\\[1ex] &amp;=~\\mathsf P(A\\mid B)~\\mathsf P(B)\\end{align}$$</p>\n\n<p>The measure of the intersection of A and B, equals the measure of event A times the proportion of event B within event A, and also equals the measure of event B times the proportion of event A within event B.</p>\n\n<p>So Bayes' Theorem is just a rearrangement:\n$$\\raise{1.5ex}{\\mathsf P(B\\mid A)} ~=~ \\dfrac{\\mathsf P(A\\mid B)~\\mathsf P(B)}{\\qquad\\qquad\\mathsf P(A)}$$</p>\n\n<p>The proportion of event B within event A is equal to the proportion of event A within event B times the measure of event B divided by the measure of event A.</p>\n" }, { "answer_id": 2300654, "score": 0, "text": "<p>When I watch Peter Falk play Columbo on TV, I think of Bayes' Theorem.</p>\n\n<p>Suppose you are a detective and your focus in on the most likely suspect, Mr. Brooks. 
As coincidences pile up, not even hard evidence, you 'know' he is your guy.</p>\n\n<p>Let $B$ represent that Mr. Brooks is guilty of murder, the cause for finding a dead body. Suppose as you investigate some new information comes to light, event $A$ and it seems relevant. As a detective, you know right away that</p>\n\n<p>$P(B|A)=\\lambda_A P(B) \\le 1$ and $\\lambda_A \\ge 0$</p>\n\n<p>You estimate $P(A|B)$ and from your experience in the world you have a good idea about $P(A)$.</p>\n\n<p>If these two numbers are equal, then $A$ is not providing any useful information, it is status quo experience. As a detective, you know about Bayes' Formula, but due to your coarse estimates, you are on the alert just for the really strange stuff.</p>\n\n<p>if $P(A|B) \\gt P(A)$ by a big magnitude, then the odds that Mr. Brooks is guilty really climbs.</p>\n\n<p>For example, you know if you ask certain questions 90% of the time you will hear the same type of answer from your suspects. Suppose you ask Mr. Brooks and he has a very strange response. You let $A$ denote <strong>strange answer event</strong>. As usual you don't worry too much about $P(A|B)$, but figure that a guilty person would have that strange response at least 30% of the time.</p>\n\n<p>The chances that Mr. Brooks is guilty just went up 3 fold. If $P(B|A)$ is now greater than $1$ don't worry. All that remains is gathering evidence that will hold up in Court.</p>\n\n<p>Detective Columbo: Uhh, Sir! Just one more thing - one more question.</p>\n" }, { "answer_id": 2080086, "score": 4, "text": "<p>I can give you a similar interpretation. Write it as\n$$\\Pr(B|A)=\\frac{\\Pr(A|B)\\Pr(B)}{\\Pr(A)}=\\frac{\\Pr(B\\cap A)}{\\Pr(A)}$$ </p>\n\n<p>First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\\Omega$ whose probability is one. We can write $\\Pr(B)$ as $$\\frac{\\Pr(B\\cap\\Omega)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{1}$$</p>\n\n<p>However, $B|A$ means that for event $B$ the sample space is <em>reduced</em> from $\\Omega$ to $A$, hence we have $\\Pr(A)$ in denomerator instead of $\\Pr(\\Omega)$. Similarly, in the numerator we have $\\Pr(B\\cap A)$ since we only consider $B$ in a subset of the sample space that intersects with $A$.</p>\n" }, { "answer_id": 2080088, "score": 3, "text": "<p>What is $P(A|B)$ intuitively? One way to think about probabilities is as proportions. For example, $P(A)$ is the proportion of the event $A$ happening out of every possible outcome, or if you prefer a more geometric intuition, imagine that $A$ is some shape on a piece of paper, $P(A)$ signifies the proportion of the total area that $A$ covers of the piece of paper.</p>\n\n<p>What then does the conditional probability $P(A|B)$ correspond to? If we had two shapes $A$ and $B$ on our piece of paper, the $P(A|B)$ corresponds to the proportion that $A$ covers of the shape $B$. In a way we are restricting our space to the shape $B$ and asking what the proportion of $A$ is out of the total space. In terms of probability, we are restricting all the possible event to those in which the event $B$ happens and asking what proportion does $A$ happen out of all possible events.</p>\n\n<p>The ratio $\\frac{P(A|B)}{P(A)}$ tells us how the proportion of $A$ changes after we restrict the total space to $B$. The interesting thing about this ratio is that it is symmetric in $A$ and $B$, which is one way to interpret Baye's Theorem. 
That is, given neither $P(A)$ or $P(B)$ are zero, $$\\frac{P(A|B)}{P(A)}=\\frac{P(B|A)}{P(B)}.$$</p>\n\n<p>In terms of the formula above, How would one figure out $P(B|A)$? Well this is the proportion of $B$ in $A$. Trivially, we have that $$P(B|A)=\\frac{P(B|A)}{P(B)}P(B). $$ So this is the proportion of $B$ in the total space times the ratio of how much it changes upon restriction of $A$, but Bayes theorem tell us that this ratio is exactly the same as $\\frac{P(A|B)}{P(A)}$, which gives the usual form of Bayes'.</p>\n" }, { "answer_id": 2080098, "score": 1, "text": "<p>Well, you have a grasp of the intuition behind the following:$$\\begin{align}\\mathsf P(A\\cap B)~&amp;=~\\mathsf P(B\\mid A)~\\mathsf P(A) \\\\[1ex] &amp;=~\\mathsf P(A\\mid B)~\\mathsf P(B)\\end{align}$$</p>\n\n<p>The measure of the intersection of A and B, equals the measure of event A times the proportion of event B within event A, and also equals the measure of event B times the proportion of event A within event B.</p>\n\n<p>So Bayes' Theorem is just a rearrangement:\n$$\\raise{1.5ex}{\\mathsf P(B\\mid A)} ~=~ \\dfrac{\\mathsf P(A\\mid B)~\\mathsf P(B)}{\\qquad\\qquad\\mathsf P(A)}$$</p>\n\n<p>The proportion of event B within event A is equal to the proportion of event A within event B times the measure of event B divided by the measure of event A.</p>\n" }, { "answer_id": 2300654, "score": 0, "text": "<p>When I watch Peter Falk play Columbo on TV, I think of Bayes' Theorem.</p>\n\n<p>Suppose you are a detective and your focus in on the most likely suspect, Mr. Brooks. As coincidences pile up, not even hard evidence, you 'know' he is your guy.</p>\n\n<p>Let $B$ represent that Mr. Brooks is guilty of murder, the cause for finding a dead body. Suppose as you investigate some new information comes to light, event $A$ and it seems relevant. As a detective, you know right away that</p>\n\n<p>$P(B|A)=\\lambda_A P(B) \\le 1$ and $\\lambda_A \\ge 0$</p>\n\n<p>You estimate $P(A|B)$ and from your experience in the world you have a good idea about $P(A)$.</p>\n\n<p>If these two numbers are equal, then $A$ is not providing any useful information, it is status quo experience. As a detective, you know about Bayes' Formula, but due to your coarse estimates, you are on the alert just for the really strange stuff.</p>\n\n<p>if $P(A|B) \\gt P(A)$ by a big magnitude, then the odds that Mr. Brooks is guilty really climbs.</p>\n\n<p>For example, you know if you ask certain questions 90% of the time you will hear the same type of answer from your suspects. Suppose you ask Mr. Brooks and he has a very strange response. You let $A$ denote <strong>strange answer event</strong>. As usual you don't worry too much about $P(A|B)$, but figure that a guilty person would have that strange response at least 30% of the time.</p>\n\n<p>The chances that Mr. Brooks is guilty just went up 3 fold. If $P(B|A)$ is now greater than $1$ don't worry. All that remains is gathering evidence that will hold up in Court.</p>\n\n<p>Detective Columbo: Uhh, Sir! Just one more thing - one more question.</p>\n" }, { "answer_id": 2080086, "score": 4, "text": "<p>I can give you a similar interpretation. Write it as\n$$\\Pr(B|A)=\\frac{\\Pr(A|B)\\Pr(B)}{\\Pr(A)}=\\frac{\\Pr(B\\cap A)}{\\Pr(A)}$$ </p>\n\n<p>First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\\Omega$ whose probability is one. 
We can write $\\Pr(B)$ as $$\\frac{\\Pr(B\\cap\\Omega)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{1}$$</p>\n\n<p>However, $B|A$ means that for event $B$ the sample space is <em>reduced</em> from $\\Omega$ to $A$, hence we have $\\Pr(A)$ in denomerator instead of $\\Pr(\\Omega)$. Similarly, in the numerator we have $\\Pr(B\\cap A)$ since we only consider $B$ in a subset of the sample space that intersects with $A$.</p>\n" }, { "answer_id": 2080088, "score": 3, "text": "<p>What is $P(A|B)$ intuitively? One way to think about probabilities is as proportions. For example, $P(A)$ is the proportion of the event $A$ happening out of every possible outcome, or if you prefer a more geometric intuition, imagine that $A$ is some shape on a piece of paper, $P(A)$ signifies the proportion of the total area that $A$ covers of the piece of paper.</p>\n\n<p>What then does the conditional probability $P(A|B)$ correspond to? If we had two shapes $A$ and $B$ on our piece of paper, the $P(A|B)$ corresponds to the proportion that $A$ covers of the shape $B$. In a way we are restricting our space to the shape $B$ and asking what the proportion of $A$ is out of the total space. In terms of probability, we are restricting all the possible event to those in which the event $B$ happens and asking what proportion does $A$ happen out of all possible events.</p>\n\n<p>The ratio $\\frac{P(A|B)}{P(A)}$ tells us how the proportion of $A$ changes after we restrict the total space to $B$. The interesting thing about this ratio is that it is symmetric in $A$ and $B$, which is one way to interpret Baye's Theorem. That is, given neither $P(A)$ or $P(B)$ are zero, $$\\frac{P(A|B)}{P(A)}=\\frac{P(B|A)}{P(B)}.$$</p>\n\n<p>In terms of the formula above, How would one figure out $P(B|A)$? Well this is the proportion of $B$ in $A$. Trivially, we have that $$P(B|A)=\\frac{P(B|A)}{P(B)}P(B). $$ So this is the proportion of $B$ in the total space times the ratio of how much it changes upon restriction of $A$, but Bayes theorem tell us that this ratio is exactly the same as $\\frac{P(A|B)}{P(A)}$, which gives the usual form of Bayes'.</p>\n" }, { "answer_id": 2080098, "score": 1, "text": "<p>Well, you have a grasp of the intuition behind the following:$$\\begin{align}\\mathsf P(A\\cap B)~&amp;=~\\mathsf P(B\\mid A)~\\mathsf P(A) \\\\[1ex] &amp;=~\\mathsf P(A\\mid B)~\\mathsf P(B)\\end{align}$$</p>\n\n<p>The measure of the intersection of A and B, equals the measure of event A times the proportion of event B within event A, and also equals the measure of event B times the proportion of event A within event B.</p>\n\n<p>So Bayes' Theorem is just a rearrangement:\n$$\\raise{1.5ex}{\\mathsf P(B\\mid A)} ~=~ \\dfrac{\\mathsf P(A\\mid B)~\\mathsf P(B)}{\\qquad\\qquad\\mathsf P(A)}$$</p>\n\n<p>The proportion of event B within event A is equal to the proportion of event A within event B times the measure of event B divided by the measure of event A.</p>\n" }, { "answer_id": 2300654, "score": 0, "text": "<p>When I watch Peter Falk play Columbo on TV, I think of Bayes' Theorem.</p>\n\n<p>Suppose you are a detective and your focus in on the most likely suspect, Mr. Brooks. As coincidences pile up, not even hard evidence, you 'know' he is your guy.</p>\n\n<p>Let $B$ represent that Mr. Brooks is guilty of murder, the cause for finding a dead body. Suppose as you investigate some new information comes to light, event $A$ and it seems relevant. 
As a detective, you know right away that</p>\n\n<p>$P(B|A)=\\lambda_A P(B) \\le 1$ and $\\lambda_A \\ge 0$</p>\n\n<p>You estimate $P(A|B)$ and from your experience in the world you have a good idea about $P(A)$.</p>\n\n<p>If these two numbers are equal, then $A$ is not providing any useful information, it is status quo experience. As a detective, you know about Bayes' Formula, but due to your coarse estimates, you are on the alert just for the really strange stuff.</p>\n\n<p>if $P(A|B) \\gt P(A)$ by a big magnitude, then the odds that Mr. Brooks is guilty really climbs.</p>\n\n<p>For example, you know if you ask certain questions 90% of the time you will hear the same type of answer from your suspects. Suppose you ask Mr. Brooks and he has a very strange response. You let $A$ denote <strong>strange answer event</strong>. As usual you don't worry too much about $P(A|B)$, but figure that a guilty person would have that strange response at least 30% of the time.</p>\n\n<p>The chances that Mr. Brooks is guilty just went up 3 fold. If $P(B|A)$ is now greater than $1$ don't worry. All that remains is gathering evidence that will hold up in Court.</p>\n\n<p>Detective Columbo: Uhh, Sir! Just one more thing - one more question.</p>\n" }, { "answer_id": 2080086, "score": 4, "text": "<p>I can give you a similar interpretation. Write it as\n$$\\Pr(B|A)=\\frac{\\Pr(A|B)\\Pr(B)}{\\Pr(A)}=\\frac{\\Pr(B\\cap A)}{\\Pr(A)}$$ </p>\n\n<p>First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\\Omega$ whose probability is one. We can write $\\Pr(B)$ as $$\\frac{\\Pr(B\\cap\\Omega)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{\\Pr(\\Omega)}=\\frac{\\Pr(B)}{1}$$</p>\n\n<p>However, $B|A$ means that for event $B$ the sample space is <em>reduced</em> from $\\Omega$ to $A$, hence we have $\\Pr(A)$ in denomerator instead of $\\Pr(\\Omega)$. Similarly, in the numerator we have $\\Pr(B\\cap A)$ since we only consider $B$ in a subset of the sample space that intersects with $A$.</p>\n" }, { "answer_id": 2080088, "score": 3, "text": "<p>What is $P(A|B)$ intuitively? One way to think about probabilities is as proportions. For example, $P(A)$ is the proportion of the event $A$ happening out of every possible outcome, or if you prefer a more geometric intuition, imagine that $A$ is some shape on a piece of paper, $P(A)$ signifies the proportion of the total area that $A$ covers of the piece of paper.</p>\n\n<p>What then does the conditional probability $P(A|B)$ correspond to? If we had two shapes $A$ and $B$ on our piece of paper, the $P(A|B)$ corresponds to the proportion that $A$ covers of the shape $B$. In a way we are restricting our space to the shape $B$ and asking what the proportion of $A$ is out of the total space. In terms of probability, we are restricting all the possible event to those in which the event $B$ happens and asking what proportion does $A$ happen out of all possible events.</p>\n\n<p>The ratio $\\frac{P(A|B)}{P(A)}$ tells us how the proportion of $A$ changes after we restrict the total space to $B$. The interesting thing about this ratio is that it is symmetric in $A$ and $B$, which is one way to interpret Baye's Theorem. That is, given neither $P(A)$ or $P(B)$ are zero, $$\\frac{P(A|B)}{P(A)}=\\frac{P(B|A)}{P(B)}.$$</p>\n\n<p>In terms of the formula above, How would one figure out $P(B|A)$? Well this is the proportion of $B$ in $A$. Trivially, we have that $$P(B|A)=\\frac{P(B|A)}{P(B)}P(B). 
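The restriction-of-the-sample-space picture in the answers above is easy to sanity-check numerically. The following minimal Python sketch takes a fair die with two arbitrarily chosen events (the specific events are illustrative assumptions, not taken from the thread) and confirms both $\Pr(B|A)=\Pr(A\cap B)/\Pr(A)$ and the symmetric ratio $\frac{\Pr(A|B)}{\Pr(A)}=\frac{\Pr(B|A)}{\Pr(B)}$ from the second answer.

```python
from fractions import Fraction

# Sample space: one roll of a fair die, with the uniform measure.
omega = set(range(1, 7))
A = {1, 2, 3, 4}   # arbitrary example event "at most 4"
B = {2, 4, 6}      # arbitrary example event "even"

def pr(event):
    # Probability = proportion of the sample space the event covers.
    return Fraction(len(event), len(omega))

# Conditioning on A means restricting the sample space to A.
pr_B_given_A = pr(A.intersection(B)) / pr(A)
pr_A_given_B = pr(A.intersection(B)) / pr(B)

# Bayes' theorem as a rearrangement of the two factorisations of Pr(A ∩ B).
assert pr_B_given_A == pr_A_given_B * pr(B) / pr(A)

# The symmetric ratio: how much each event's proportion changes on restriction.
assert pr_A_given_B / pr(A) == pr_B_given_A / pr(B)

print(pr_B_given_A)  # 1/2: within the restricted space {1,2,3,4}, the evens are {2,4}
```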
Question: Bayes Formula: Intuition Answer 1: <p>I can give you a similar interpretation. Write it as $$\Pr(B|A)=\frac{\Pr(A|B)\Pr(B)}{\Pr(A)}=\frac{\Pr(B\cap A)}{\Pr(A)}$$ </p> <p>First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\Omega$, and $\Omega$ itself has probability one. We can write $\Pr(B)$ as $$\frac{\Pr(B\cap\Omega)}{\Pr(\Omega)}=\frac{\Pr(B)}{\Pr(\Omega)}=\frac{\Pr(B)}{1}$$</p> <p>However, $B|A$ means that for event $B$ the sample space is <em>reduced</em> from $\Omega$ to $A$, hence we have $\Pr(A)$ in the denominator instead of $\Pr(\Omega)$. Similarly, in the numerator we have $\Pr(B\cap A)$ since we only consider the part of $B$ that intersects $A$.</p> Answer 2: <p>What is $P(A|B)$ intuitively? One way to think about probabilities is as proportions. For example, $P(A)$ is the proportion of the event $A$ happening out of every possible outcome; or, if you prefer a more geometric intuition, imagine that $A$ is some shape on a piece of paper, so that $P(A)$ signifies the proportion of the paper's total area that $A$ covers.</p> <p>What then does the conditional probability $P(A|B)$ correspond to? If we had two shapes $A$ and $B$ on our piece of paper, then $P(A|B)$ corresponds to the proportion of the shape $B$ that $A$ covers. In a way we are restricting our space to the shape $B$ and asking what proportion $A$ occupies of that restricted space. In terms of probability, we are restricting the possible outcomes to those in which the event $B$ happens and asking in what proportion of those $A$ happens.</p> <p>The ratio $\frac{P(A|B)}{P(A)}$ tells us how the proportion of $A$ changes after we restrict the total space to $B$. The interesting thing about this ratio is that it is symmetric in $A$ and $B$, which is one way to interpret Bayes' Theorem. That is, given that neither $P(A)$ nor $P(B)$ is zero, $$\frac{P(A|B)}{P(A)}=\frac{P(B|A)}{P(B)}.$$</p> <p>In terms of the formula above, how would one figure out $P(B|A)$? Well, this is the proportion of $B$ in $A$. Trivially, we have that $$P(B|A)=\frac{P(B|A)}{P(B)}P(B). $$ So this is the proportion of $B$ in the total space times the ratio by which it changes upon restriction to $A$, but Bayes' theorem tells us that this ratio is exactly the same as $\frac{P(A|B)}{P(A)}$, which gives the usual form of Bayes' theorem.</p> Answer 3: <p>Well, you have a grasp of the intuition behind the following:$$\begin{align}\mathsf P(A\cap B)~&amp;=~\mathsf P(B\mid A)~\mathsf P(A) \\[1ex] &amp;=~\mathsf P(A\mid B)~\mathsf P(B)\end{align}$$</p> <p>The measure of the intersection of A and B equals the measure of event A times the proportion of event B within event A, and also equals the measure of event B times the proportion of event A within event B.</p> <p>So Bayes' Theorem is just a rearrangement: $$\raise{1.5ex}{\mathsf P(B\mid A)} ~=~ \dfrac{\mathsf P(A\mid B)~\mathsf P(B)}{\qquad\qquad\mathsf P(A)}$$</p> <p>The proportion of event B within event A is equal to the proportion of event A within event B, times the measure of event B, divided by the measure of event A.</p> Answer 4: <p>When I watch Peter Falk play Columbo on TV, I think of Bayes' Theorem.</p> <p>Suppose you are a detective and your focus is on the most likely suspect, Mr. Brooks. As coincidences pile up, even without hard evidence, you 'know' he is your guy.</p> <p>Let $B$ represent the event that Mr. Brooks is guilty of murder, the cause for finding a dead body.
Suppose that as you investigate, some new information comes to light, event $A$, and it seems relevant. As a detective, you know right away that</p> <p>$P(B|A)=\lambda_A P(B) \le 1$ for some $\lambda_A \ge 0$.</p> <p>You estimate $P(A|B)$, and from your experience in the world you have a good idea about $P(A)$.</p> <p>If these two numbers are equal, then $A$ is not providing any useful information; it is status quo experience. As a detective, you know about Bayes' Formula, but due to your coarse estimates, you are on the alert just for the really strange stuff.</p> <p>If $P(A|B) \gt P(A)$ by a big margin, then the odds that Mr. Brooks is guilty really climb.</p> <p>For example, you know that if you ask certain questions, 90% of the time you will hear the same type of answer from your suspects. Suppose you ask Mr. Brooks and he has a very strange response. You let $A$ denote the <strong>strange answer event</strong>. As usual you don't worry too much about the exact value of $P(A|B)$, but you figure that a guilty person would have that strange response at least 30% of the time.</p> <p>The chances that Mr. Brooks is guilty just went up 3-fold. If the rough estimate of $P(B|A)$ now comes out greater than $1$, don't worry: it only means the estimates were coarse. All that remains is gathering evidence that will hold up in court.</p> <p>Detective Columbo: Uhh, Sir! Just one more thing - one more question.</p>
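The detective story in the last answer reduces to one line of arithmetic. In the sketch below, $P(A)$ and $P(A|B)$ are the figures stated in the answer; the prior $P(B)$ is never fixed there, so the value used is purely an assumed illustration.

```python
from fractions import Fraction

# B = "Brooks is guilty", A = "strange answer to the question".
p_A = Fraction(1, 10)          # 90% of suspects answer the usual way, so 10% strangely
p_A_given_B = Fraction(3, 10)  # a guilty suspect answers strangely 30% of the time
p_B = Fraction(1, 5)           # ASSUMED prior belief in guilt; not from the thread

bayes_factor = p_A_given_B / p_A   # the lambda_A of the answer
p_B_given_A = bayes_factor * p_B   # Bayes' theorem

print(bayes_factor)  # 3, the 3-fold jump mentioned in the answer
print(p_B_given_A)   # 3/5 under the assumed prior

# With coarse estimates the product can formally exceed 1; a genuine
# posterior cannot, which is the caveat the answer itself makes.
```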
Find the solution of the pde $xu_y-yu_x=u$.
2,080,115
4
https://math.stackexchange.com/questions/2080115/find-the-solution-of-the-pde-xu-y-yu-x-u
<p>I am trying to find the equation of the characteristic curves and the solution of the partial differential equation $$x\frac{\partial u}{\partial y}-y\frac{\partial u}{\partial x}=u$$ with $u(x,0)=\sin(\frac{\pi}{4}x)$.</p> <p><strong>What I did:</strong> $$\frac{dx}{-y}=\frac{dy}{x}=\frac{du}{u}$$ From the first two equalities, $$\frac{dx}{-y}=\frac{dy}{x},$$ by solving we get $x^2+y^2=c_1$ for some constant $c_1$. Now I am stuck with the third equality and cannot come up with a suitable manipulation to solve the problem. How can I do this? Any help would be great. Thanks.</p>
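Before the answers, a short symbolic check of the first integral found above: written as the autonomous system $x'(s)=-y$, $y'(s)=x$ (the ODE form of $\frac{dx}{-y}=\frac{dy}{x}$), the characteristics are circles, along which $x^2+y^2$ is constant. A minimal SymPy sketch of this consistency check:

```python
import sympy as sp

s, eta = sp.symbols('s eta')

# Candidate characteristics: the circles x^2 + y^2 = eta^2.
x = eta * sp.cos(s)
y = eta * sp.sin(s)

# They satisfy dx/ds = -y and dy/ds = x, the ODE form of dx/(-y) = dy/x ...
assert sp.simplify(sp.diff(x, s) + y) == 0
assert sp.simplify(sp.diff(y, s) - x) == 0

# ... so x^2 + y^2 is indeed constant (= c_1) along each characteristic.
assert sp.simplify(x**2 + y**2 - eta**2) == 0
```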
[ { "answer_id": 2080219, "score": 4, "text": "<p>In polar coordinates:\n$$x=r\\cos\\phi \\\\ y=r\\sin\\phi \\\\\n\\frac{\\partial u}{\\partial x}=\\cos \\phi \\frac{\\partial u}{\\partial r}-\\frac{1}{r}\\sin \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n\\frac{\\partial u}{\\partial y}=\\sin \\phi \\frac{\\partial u}{\\partial r}+\\frac{1}{r}\\cos \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n$$\n$x\\frac{\\partial u}{\\partial y}-y\\frac{\\partial u}{\\partial x}=u$ becomes: $$\\frac{\\partial u}{\\partial \\phi}=u$$.<br>\nSo:\n$$\nu=f(r)e^\\phi \\\\\n=f(\\sqrt{x^2+y^2})e^{atan2(x,y)} \n$$</p>\n" }, { "answer_id": 2080239, "score": 3, "text": "<p>Do you know the characteristics method? Have a look at <a href=\"https://en.wikipedia.org/wiki/Method_of_characteristics\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Method_of_characteristics</a> .</p>\n\n<p>You form the PDE $x u_y - y u_x = u$ into the three ODEs </p>\n\n<p>$$y_s=x, \\ x_s=-y,\\ u_s=u$$</p>\n\n<p>with initial conditions (i.e. $s=0$) $~x(0,\\eta)=\\eta, y(0,\\eta)=0, u(0,\\eta)=\\sin(\\eta \\pi/4)$. Here you have a non-characteristic Cauchy problem for $\\eta \\neq 0$ since \n$$ \\begin{vmatrix} -y(0,\\eta) &amp; x(0,\\eta)' \\\\ x(0,\\eta) &amp;  y(0,\\eta)'  \\end{vmatrix}=\\begin{vmatrix} 0 &amp; 1 \\\\ \\eta &amp;  0  \\end{vmatrix}=-\\eta \\neq 0$$</p>\n\n<p>Solving these ODEs you get</p>\n\n<p>$$x=A(\\eta) \\cos(s)-B(\\eta)\\sin(s), \\ y=A(\\eta) \\sin(s)+B(\\eta)\\cos(s), \\ u=C(\\eta) e^s=\\sin(\\eta \\pi/4) e^s $$</p>\n\n<p>Now you have to plug in your initial conditions in $x$ &amp; $y$, solve for $\\eta$ &amp; $s$ and plug this into $u$ to get your solution dependent on $x$ &amp; $y$.</p>\n\n<p>By the initial conditions we get $A(\\eta)=\\eta$ and $B(\\eta)=0$. Therefore $x=\\eta \\cos(s), y=\\eta \\sin(s)$. By squaring and adding we get $x^2+y^2=\\eta^2$ (note that $(x,y)\\neq(0,0)$ by assumption) and $s=\\arccos(\\frac{x}{\\sqrt{x^2+y^2}})=\\arcsin(\\frac{y}{\\sqrt{x^2+y^2}})=\\arctan(\\frac{y}{x})$. Just choose one of them since all three forms are equivalent.</p>\n\n<p>So our solution reads</p>\n\n<p>$$u=\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right) \\exp\\left[\\arccos\\left(\\frac{x}{\\sqrt{x^2+y^2}}\\right)\\right]$$</p>\n\n<p>in a solution area with $(x,y)\\neq (0,0)$, for example $G=\\{(x,y) \\in \\mathbb{R}^2 | x&gt;0,y&gt;0 \\}$.</p>\n\n<p>Checking that the initial condition is fulfilled: $$u(x,0)=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp[\\arccos(1)]=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp(0)=\\sin\\left(\\frac{\\pi}{4}x \\right)$$</p>\n" }, { "answer_id": 2080925, "score": 2, "text": "<p>$$xu_y-yu_x=u$$\nWhat you did is correct. The set of differential equations is :\n$$\\frac{dx}{-y}=\\frac{dy}{x}=\\frac{du}{u}$$ \nThe equation of a first family of characteristic curves is : \n$$x^2+y^2=c_1$$\nAt this point, we have to consider either $\\frac{dx}{-y}=\\frac{du}{u}$ or $\\frac{dy}{x}=\\frac{du}{u}$</p>\n\n<p>For example, with the first one : $-\\frac{dx}{\\sqrt{c_1-x^2 }}=\\frac{du}{u}$ \n$$\\ln|u|+\\tan^{-1}\\left(\\frac{x}{\\sqrt{c_1-x^2 }} \\right)=\\ln|u|+\\tan^{-1}\\left(\\frac{x}{y} \\right)=c_2$$\nany constant $c_1$ and $c_2$ on the characteristic curves. Elsewhere, $c_1$ and $c_2$ are related :\n$$\\ln|u|-\\tan^{-1}\\left(\\frac{x}{y} \\right) =f(x^2+y^2)$$\nany differentiable function $f$. 
Or, with $F=e^f$, a form for the general solution of the PDE is:\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$$\nNote: considering the negative square root instead of the positive one leads to $u=\\exp\\left(\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$, which is a solution of $xu_y-yu_x=-u$.</p>\n\n<p>Boundary condition (as $y\\to 0^+$ with $x&gt;0$, $\\tan^{-1}(x/y)\\to\\pi/2$): $u(x,0)=\\sin(\\frac{\\pi}{4}x) = \\exp\\left(-\\frac{\\pi}{2} \\right)F(x^2)$,</p>\n\n<p>so the function $F(X)$ is determined: $F(X)=\\exp\\left(\\frac{\\pi}{2}\\right)\\sin(\\frac{\\pi}{4}\\sqrt{X})$.</p>\n\n<p>With $X=x^2+y^2$, bringing it back into the above general solution leads to the particular solution that satisfies the boundary condition:\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y}\\right) +\\frac{\\pi}{2}\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$\n$$u(x,y)=\\exp\\left(\\tan^{-1}\\left(\\frac{y}{x}\\right)\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$</p>\n" }, { "answer_id": 2942497, "score": 1, "text": "<p>A scale-invariant family of solutions to this equation is \n<span class=\"math-container\">$$C(y^2 + x^2)^\\frac{\\alpha}{2}\\bigg{(}\\frac{y+ix}{y-ix}\\bigg{)}^\\frac{i}{2}$$</span> where <span class=\"math-container\">$\\alpha$</span> is any real number; in polar coordinates this family is, up to a constant factor, <span class=\"math-container\">$C\\,r^{\\alpha}e^{\\phi}$</span>, which indeed satisfies <span class=\"math-container\">$u_\\phi=u$</span>. The derivation involves group theory and is rather lengthy, but the gist of it is that you assume the solution manifold is partitioned by the stabilizers of an invariant family of scaling groups: <span class=\"math-container\">$x'=\\lambda x$</span>, <span class=\"math-container\">$y'=\\lambda ^ \\beta y$</span>, and <span class=\"math-container\">$u'=\\lambda ^ \\alpha u$</span>. Apply this family of groups to the PDE and it becomes evident that, for invariance, <span class=\"math-container\">$\\beta =1$</span>. There is no restriction on <span class=\"math-container\">$\\alpha$</span>. Using the characteristic equation <span class=\"math-container\">$\\frac{dx}{x}=\\frac{dy}{y}=\\frac{du}{\\alpha u}$</span> we can pull out two independent integrals, <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span> and <span class=\"math-container\">$\\nu = \\frac{u}{x^ \\alpha}$</span>. Because these stabilizers partition the manifold, the solution must be of the form <span class=\"math-container\">$\\nu = f(\\mu)$</span> or <span class=\"math-container\">$u=x^\\alpha f(\\frac{y}{x})$</span>, where <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span>\nis the argument of the function $f$. Now take partial derivatives of this form, plug them back into the original PDE, and solve for the function $f$ (this takes some work). Multiply this function by <span class=\"math-container\">$x^\\alpha$</span> and you are done.</p>\n" } ]
Have a look at <a href=\"https://en.wikipedia.org/wiki/Method_of_characteristics\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Method_of_characteristics</a> .</p>\n\n<p>You form the PDE $x u_y - y u_x = u$ into the three ODEs </p>\n\n<p>$$y_s=x, \\ x_s=-y,\\ u_s=u$$</p>\n\n<p>with initial conditions (i.e. $s=0$) $~x(0,\\eta)=\\eta, y(0,\\eta)=0, u(0,\\eta)=\\sin(\\eta \\pi/4)$. Here you have a non-characteristic Cauchy problem for $\\eta \\neq 0$ since \n$$ \\begin{vmatrix} -y(0,\\eta) &amp; x(0,\\eta)' \\\\ x(0,\\eta) &amp;  y(0,\\eta)'  \\end{vmatrix}=\\begin{vmatrix} 0 &amp; 1 \\\\ \\eta &amp;  0  \\end{vmatrix}=-\\eta \\neq 0$$</p>\n\n<p>Solving these ODEs you get</p>\n\n<p>$$x=A(\\eta) \\cos(s)-B(\\eta)\\sin(s), \\ y=A(\\eta) \\sin(s)+B(\\eta)\\cos(s), \\ u=C(\\eta) e^s=\\sin(\\eta \\pi/4) e^s $$</p>\n\n<p>Now you have to plug in your initial conditions in $x$ &amp; $y$, solve for $\\eta$ &amp; $s$ and plug this into $u$ to get your solution dependent on $x$ &amp; $y$.</p>\n\n<p>By the initial conditions we get $A(\\eta)=\\eta$ and $B(\\eta)=0$. Therefore $x=\\eta \\cos(s), y=\\eta \\sin(s)$. By squaring and adding we get $x^2+y^2=\\eta^2$ (note that $(x,y)\\neq(0,0)$ by assumption) and $s=\\arccos(\\frac{x}{\\sqrt{x^2+y^2}})=\\arcsin(\\frac{y}{\\sqrt{x^2+y^2}})=\\arctan(\\frac{y}{x})$. Just choose one of them since all three forms are equivalent.</p>\n\n<p>So our solution reads</p>\n\n<p>$$u=\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right) \\exp\\left[\\arccos\\left(\\frac{x}{\\sqrt{x^2+y^2}}\\right)\\right]$$</p>\n\n<p>in a solution area with $(x,y)\\neq (0,0)$, for example $G=\\{(x,y) \\in \\mathbb{R}^2 | x&gt;0,y&gt;0 \\}$.</p>\n\n<p>Checking that the initial condition is fulfilled: $$u(x,0)=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp[\\arccos(1)]=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp(0)=\\sin\\left(\\frac{\\pi}{4}x \\right)$$</p>\n" }, { "answer_id": 2080925, "score": 2, "text": "<p>$$xu_y-yu_x=u$$\nWhat you did is correct. The set of differential equations is :\n$$\\frac{dx}{-y}=\\frac{dy}{x}=\\frac{du}{u}$$ \nThe equation of a first family of characteristic curves is : \n$$x^2+y^2=c_1$$\nAt this point, we have to consider either $\\frac{dx}{-y}=\\frac{du}{u}$ or $\\frac{dy}{x}=\\frac{du}{u}$</p>\n\n<p>For example, with the first one : $-\\frac{dx}{\\sqrt{c_1-x^2 }}=\\frac{du}{u}$ \n$$\\ln|u|+\\tan^{-1}\\left(\\frac{x}{\\sqrt{c_1-x^2 }} \\right)=\\ln|u|+\\tan^{-1}\\left(\\frac{x}{y} \\right)=c_2$$\nany constant $c_1$ and $c_2$ on the characteristic curves. Elsewhere, $c_1$ and $c_2$ are related :\n$$\\ln|u|-\\tan^{-1}\\left(\\frac{x}{y} \\right) =f(x^2+y^2)$$\nany differentiable function $f$. 
Or, with $F=e^f$, a form for the general solution of the PDE is :\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$$\nNote : Considering the negative square instead of the positive, leads to $u=\\exp\\left(\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$ which is solution of $xu_y-yu_x=-u$.</p>\n\n<p>Boundary condition : $u(x,0)=\\sin(\\frac{\\pi}{4}x) = \\exp\\left(-\\frac{\\pi}{2} \\right)F(x^2)$</p>\n\n<p>so, the function $F(X)$ is determined : $F(X)=\\exp\\left(\\frac{\\pi}{2}\\right)\\sin(\\frac{\\pi}{4}\\sqrt{X})$</p>\n\n<p>With $X=x^2+y^2$, bringing it back into the above general solution leads to the particular solution according to the boundary condition :\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y}\\right) +\\frac{\\pi}{2}\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$\n$$u(x,y)=\\exp\\left(\\tan^{-1}\\left(\\frac{y}{x}\\right)\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$</p>\n" }, { "answer_id": 2942497, "score": 1, "text": "<p>The most general solution to this equation is \n<span class=\"math-container\">$$C(y^2 + x^2)^\\frac{\\alpha}{2}\\bigg{(}\\frac{y-ix}{y+ix}\\bigg{)}^\\frac{i}{2}$$</span> where <span class=\"math-container\">$\\alpha$</span> is any real number. The solution involves group theory and is rather lengthy, but the gist of it is that you assume the manifold of the solution is partitioned by the stabilizers of an invariant family of infinite groups: <span class=\"math-container\">$x'=\\lambda x$</span>, <span class=\"math-container\">$y'=\\lambda ^ \\beta y$</span>, and <span class=\"math-container\">$u'=\\lambda ^ \\alpha u$</span>. Apply this family of groups to the PDE and it becomes evident that, for invariance, <span class=\"math-container\">$\\beta =1$</span>. There is no restriction on <span class=\"math-container\">$\\alpha$</span>. Using the characteristic equation <span class=\"math-container\">$\\frac{dx}{x}=\\frac{dy}{y}=\\frac{du}{\\alpha u}$</span> we can pull out two independent integrals, <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span> and <span class=\"math-container\">$\\nu = \\frac{u}{x^ \\alpha}$</span>. Because these stabilizers partition the manifold, the solution must be of the form <span class=\"math-container\">$\\nu = f(\\mu)$</span> or <span class=\"math-container\">$u=x^\\alpha f(\\frac{y}{x})$</span> where <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span>\nis the argument for the function f. Now take partial derivatives of this function, plug them back into the original PDE, and find function f. (this takes some work) Multiply this function by <span class=\"math-container\">$x^\\alpha$</span> and you are done.</p>\n" }, { "answer_id": 2080219, "score": 4, "text": "<p>In polar coordinates:\n$$x=r\\cos\\phi \\\\ y=r\\sin\\phi \\\\\n\\frac{\\partial u}{\\partial x}=\\cos \\phi \\frac{\\partial u}{\\partial r}-\\frac{1}{r}\\sin \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n\\frac{\\partial u}{\\partial y}=\\sin \\phi \\frac{\\partial u}{\\partial r}+\\frac{1}{r}\\cos \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n$$\n$x\\frac{\\partial u}{\\partial y}-y\\frac{\\partial u}{\\partial x}=u$ becomes: $$\\frac{\\partial u}{\\partial \\phi}=u$$.<br>\nSo:\n$$\nu=f(r)e^\\phi \\\\\n=f(\\sqrt{x^2+y^2})e^{atan2(x,y)} \n$$</p>\n" }, { "answer_id": 2080239, "score": 3, "text": "<p>Do you know the characteristics method? 
Have a look at <a href=\"https://en.wikipedia.org/wiki/Method_of_characteristics\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Method_of_characteristics</a> .</p>\n\n<p>You form the PDE $x u_y - y u_x = u$ into the three ODEs </p>\n\n<p>$$y_s=x, \\ x_s=-y,\\ u_s=u$$</p>\n\n<p>with initial conditions (i.e. $s=0$) $~x(0,\\eta)=\\eta, y(0,\\eta)=0, u(0,\\eta)=\\sin(\\eta \\pi/4)$. Here you have a non-characteristic Cauchy problem for $\\eta \\neq 0$ since \n$$ \\begin{vmatrix} -y(0,\\eta) &amp; x(0,\\eta)' \\\\ x(0,\\eta) &amp;  y(0,\\eta)'  \\end{vmatrix}=\\begin{vmatrix} 0 &amp; 1 \\\\ \\eta &amp;  0  \\end{vmatrix}=-\\eta \\neq 0$$</p>\n\n<p>Solving these ODEs you get</p>\n\n<p>$$x=A(\\eta) \\cos(s)-B(\\eta)\\sin(s), \\ y=A(\\eta) \\sin(s)+B(\\eta)\\cos(s), \\ u=C(\\eta) e^s=\\sin(\\eta \\pi/4) e^s $$</p>\n\n<p>Now you have to plug in your initial conditions in $x$ &amp; $y$, solve for $\\eta$ &amp; $s$ and plug this into $u$ to get your solution dependent on $x$ &amp; $y$.</p>\n\n<p>By the initial conditions we get $A(\\eta)=\\eta$ and $B(\\eta)=0$. Therefore $x=\\eta \\cos(s), y=\\eta \\sin(s)$. By squaring and adding we get $x^2+y^2=\\eta^2$ (note that $(x,y)\\neq(0,0)$ by assumption) and $s=\\arccos(\\frac{x}{\\sqrt{x^2+y^2}})=\\arcsin(\\frac{y}{\\sqrt{x^2+y^2}})=\\arctan(\\frac{y}{x})$. Just choose one of them since all three forms are equivalent.</p>\n\n<p>So our solution reads</p>\n\n<p>$$u=\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right) \\exp\\left[\\arccos\\left(\\frac{x}{\\sqrt{x^2+y^2}}\\right)\\right]$$</p>\n\n<p>in a solution area with $(x,y)\\neq (0,0)$, for example $G=\\{(x,y) \\in \\mathbb{R}^2 | x&gt;0,y&gt;0 \\}$.</p>\n\n<p>Checking that the initial condition is fulfilled: $$u(x,0)=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp[\\arccos(1)]=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp(0)=\\sin\\left(\\frac{\\pi}{4}x \\right)$$</p>\n" }, { "answer_id": 2080925, "score": 2, "text": "<p>$$xu_y-yu_x=u$$\nWhat you did is correct. The set of differential equations is :\n$$\\frac{dx}{-y}=\\frac{dy}{x}=\\frac{du}{u}$$ \nThe equation of a first family of characteristic curves is : \n$$x^2+y^2=c_1$$\nAt this point, we have to consider either $\\frac{dx}{-y}=\\frac{du}{u}$ or $\\frac{dy}{x}=\\frac{du}{u}$</p>\n\n<p>For example, with the first one : $-\\frac{dx}{\\sqrt{c_1-x^2 }}=\\frac{du}{u}$ \n$$\\ln|u|+\\tan^{-1}\\left(\\frac{x}{\\sqrt{c_1-x^2 }} \\right)=\\ln|u|+\\tan^{-1}\\left(\\frac{x}{y} \\right)=c_2$$\nany constant $c_1$ and $c_2$ on the characteristic curves. Elsewhere, $c_1$ and $c_2$ are related :\n$$\\ln|u|-\\tan^{-1}\\left(\\frac{x}{y} \\right) =f(x^2+y^2)$$\nany differentiable function $f$. 
Or, with $F=e^f$, a form for the general solution of the PDE is :\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$$\nNote : Considering the negative square instead of the positive, leads to $u=\\exp\\left(\\tan^{-1}\\left(\\frac{x}{y} \\right) \\right)F(x^2+y^2)$ which is solution of $xu_y-yu_x=-u$.</p>\n\n<p>Boundary condition : $u(x,0)=\\sin(\\frac{\\pi}{4}x) = \\exp\\left(-\\frac{\\pi}{2} \\right)F(x^2)$</p>\n\n<p>so, the function $F(X)$ is determined : $F(X)=\\exp\\left(\\frac{\\pi}{2}\\right)\\sin(\\frac{\\pi}{4}\\sqrt{X})$</p>\n\n<p>With $X=x^2+y^2$, bringing it back into the above general solution leads to the particular solution according to the boundary condition :\n$$u(x,y)=\\exp\\left(-\\tan^{-1}\\left(\\frac{x}{y}\\right) +\\frac{\\pi}{2}\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$\n$$u(x,y)=\\exp\\left(\\tan^{-1}\\left(\\frac{y}{x}\\right)\\right)\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right)$$</p>\n" }, { "answer_id": 2942497, "score": 1, "text": "<p>The most general solution to this equation is \n<span class=\"math-container\">$$C(y^2 + x^2)^\\frac{\\alpha}{2}\\bigg{(}\\frac{y-ix}{y+ix}\\bigg{)}^\\frac{i}{2}$$</span> where <span class=\"math-container\">$\\alpha$</span> is any real number. The solution involves group theory and is rather lengthy, but the gist of it is that you assume the manifold of the solution is partitioned by the stabilizers of an invariant family of infinite groups: <span class=\"math-container\">$x'=\\lambda x$</span>, <span class=\"math-container\">$y'=\\lambda ^ \\beta y$</span>, and <span class=\"math-container\">$u'=\\lambda ^ \\alpha u$</span>. Apply this family of groups to the PDE and it becomes evident that, for invariance, <span class=\"math-container\">$\\beta =1$</span>. There is no restriction on <span class=\"math-container\">$\\alpha$</span>. Using the characteristic equation <span class=\"math-container\">$\\frac{dx}{x}=\\frac{dy}{y}=\\frac{du}{\\alpha u}$</span> we can pull out two independent integrals, <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span> and <span class=\"math-container\">$\\nu = \\frac{u}{x^ \\alpha}$</span>. Because these stabilizers partition the manifold, the solution must be of the form <span class=\"math-container\">$\\nu = f(\\mu)$</span> or <span class=\"math-container\">$u=x^\\alpha f(\\frac{y}{x})$</span> where <span class=\"math-container\">$\\mu = \\frac{y}{x}$</span>\nis the argument for the function f. Now take partial derivatives of this function, plug them back into the original PDE, and find function f. (this takes some work) Multiply this function by <span class=\"math-container\">$x^\\alpha$</span> and you are done.</p>\n" }, { "answer_id": 2080219, "score": 4, "text": "<p>In polar coordinates:\n$$x=r\\cos\\phi \\\\ y=r\\sin\\phi \\\\\n\\frac{\\partial u}{\\partial x}=\\cos \\phi \\frac{\\partial u}{\\partial r}-\\frac{1}{r}\\sin \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n\\frac{\\partial u}{\\partial y}=\\sin \\phi \\frac{\\partial u}{\\partial r}+\\frac{1}{r}\\cos \\phi \\frac{\\partial u}{\\partial \\phi}\\\\\n$$\n$x\\frac{\\partial u}{\\partial y}-y\\frac{\\partial u}{\\partial x}=u$ becomes: $$\\frac{\\partial u}{\\partial \\phi}=u$$.<br>\nSo:\n$$\nu=f(r)e^\\phi \\\\\n=f(\\sqrt{x^2+y^2})e^{atan2(x,y)} \n$$</p>\n" }, { "answer_id": 2080239, "score": 3, "text": "<p>Do you know the characteristics method? 
Have a look at <a href=\"https://en.wikipedia.org/wiki/Method_of_characteristics\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Method_of_characteristics</a> .</p>\n\n<p>You form the PDE $x u_y - y u_x = u$ into the three ODEs </p>\n\n<p>$$y_s=x, \\ x_s=-y,\\ u_s=u$$</p>\n\n<p>with initial conditions (i.e. $s=0$) $~x(0,\\eta)=\\eta, y(0,\\eta)=0, u(0,\\eta)=\\sin(\\eta \\pi/4)$. Here you have a non-characteristic Cauchy problem for $\\eta \\neq 0$ since \n$$ \\begin{vmatrix} -y(0,\\eta) &amp; x(0,\\eta)' \\\\ x(0,\\eta) &amp;  y(0,\\eta)'  \\end{vmatrix}=\\begin{vmatrix} 0 &amp; 1 \\\\ \\eta &amp;  0  \\end{vmatrix}=-\\eta \\neq 0$$</p>\n\n<p>Solving these ODEs you get</p>\n\n<p>$$x=A(\\eta) \\cos(s)-B(\\eta)\\sin(s), \\ y=A(\\eta) \\sin(s)+B(\\eta)\\cos(s), \\ u=C(\\eta) e^s=\\sin(\\eta \\pi/4) e^s $$</p>\n\n<p>Now you have to plug in your initial conditions in $x$ &amp; $y$, solve for $\\eta$ &amp; $s$ and plug this into $u$ to get your solution dependent on $x$ &amp; $y$.</p>\n\n<p>By the initial conditions we get $A(\\eta)=\\eta$ and $B(\\eta)=0$. Therefore $x=\\eta \\cos(s), y=\\eta \\sin(s)$. By squaring and adding we get $x^2+y^2=\\eta^2$ (note that $(x,y)\\neq(0,0)$ by assumption) and $s=\\arccos(\\frac{x}{\\sqrt{x^2+y^2}})=\\arcsin(\\frac{y}{\\sqrt{x^2+y^2}})=\\arctan(\\frac{y}{x})$. Just choose one of them since all three forms are equivalent.</p>\n\n<p>So our solution reads</p>\n\n<p>$$u=\\sin\\left(\\frac{\\pi}{4}\\sqrt{x^2+y^2}\\right) \\exp\\left[\\arccos\\left(\\frac{x}{\\sqrt{x^2+y^2}}\\right)\\right]$$</p>\n\n<p>in a solution area with $(x,y)\\neq (0,0)$, for example $G=\\{(x,y) \\in \\mathbb{R}^2 | x&gt;0,y&gt;0 \\}$.</p>\n\n<p>Checking that the initial condition is fulfilled: $$u(x,0)=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp[\\arccos(1)]=\\sin\\left(\\frac{\\pi}{4}x \\right) \\exp(0)=\\sin\\left(\\frac{\\pi}{4}x \\right)$$</p>\n" }, { "answer_id": 2080925, "score": 2, "text": "<p>$$xu_y-yu_x=u$$\nWhat you did is correct. The set of differential equations is :\n$$\\frac{dx}{-y}=\\frac{dy}{x}=\\frac{du}{u}$$ \nThe equation of a first family of characteristic curves is : \n$$x^2+y^2=c_1$$\nAt this point, we have to consider either $\\frac{dx}{-y}=\\frac{du}{u}$ or $\\frac{dy}{x}=\\frac{du}{u}$</p>\n\n<p>For example, with the first one : $-\\frac{dx}{\\sqrt{c_1-x^2 }}=\\frac{du}{u}$ \n$$\\ln|u|+\\tan^{-1}\\left(\\frac{x}{\\sqrt{c_1-x^2 }} \\right)=\\ln|u|+\\tan^{-1}\\left(\\frac{x}{y} \\right)=c_2$$\nany constant $c_1$ and $c_2$ on the characteristic curves. Elsewhere, $c_1$ and $c_2$ are related :\n$$\\ln|u|-\\tan^{-1}\\left(\\frac{x}{y} \\right) =f(x^2+y^2)$$\nany differentiable function $f$. 
Or, with $F=e^f$, a form for the general solution of the PDE is :\n$$u(x,y)=\exp\left(-\tan^{-1}\left(\frac{x}{y} \right) \right)F(x^2+y^2)$$\nNote : Considering the negative square root instead of the positive one leads to $u=\exp\left(\tan^{-1}\left(\frac{x}{y} \right) \right)F(x^2+y^2)$, which is a solution of $xu_y-yu_x=-u$.</p>\n\n<p>Boundary condition : $u(x,0)=\sin(\frac{\pi}{4}x) = \exp\left(-\frac{\pi}{2} \right)F(x^2)$</p>\n\n<p>so, the function $F(X)$ is determined : $F(X)=\exp\left(\frac{\pi}{2}\right)\sin(\frac{\pi}{4}\sqrt{X})$</p>\n\n<p>With $X=x^2+y^2$, bringing it back into the above general solution leads to the particular solution according to the boundary condition :\n$$u(x,y)=\exp\left(-\tan^{-1}\left(\frac{x}{y}\right) +\frac{\pi}{2}\right)\sin\left(\frac{\pi}{4}\sqrt{x^2+y^2}\right)$$\n$$u(x,y)=\exp\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\sin\left(\frac{\pi}{4}\sqrt{x^2+y^2}\right)$$</p>\n" }, { "answer_id": 2942497, "score": 1, "text": "<p>The most general solution to this equation is \n<span class=\"math-container\">$$C(y^2 + x^2)^\frac{\alpha}{2}\bigg{(}\frac{y-ix}{y+ix}\bigg{)}^\frac{i}{2}$$</span> where <span class=\"math-container\">$\alpha$</span> is any real number. The solution involves group theory and is rather lengthy, but the gist of it is that you assume the manifold of the solution is partitioned by the stabilizers of an invariant family of infinite groups: <span class=\"math-container\">$x'=\lambda x$</span>, <span class=\"math-container\">$y'=\lambda ^ \beta y$</span>, and <span class=\"math-container\">$u'=\lambda ^ \alpha u$</span>. Apply this family of groups to the PDE and it becomes evident that, for invariance, <span class=\"math-container\">$\beta =1$</span>. There is no restriction on <span class=\"math-container\">$\alpha$</span>. Using the characteristic equation <span class=\"math-container\">$\frac{dx}{x}=\frac{dy}{y}=\frac{du}{\alpha u}$</span> we can pull out two independent integrals, <span class=\"math-container\">$\mu = \frac{y}{x}$</span> and <span class=\"math-container\">$\nu = \frac{u}{x^ \alpha}$</span>. Because these stabilizers partition the manifold, the solution must be of the form <span class=\"math-container\">$\nu = f(\mu)$</span> or <span class=\"math-container\">$u=x^\alpha f(\frac{y}{x})$</span> where <span class=\"math-container\">$\mu = \frac{y}{x}$</span>\nis the argument for the function f. Now take partial derivatives of this function, plug them back into the original PDE, and find function f. (this takes some work) Multiply this function by <span class=\"math-container\">$x^\alpha$</span> and you are done.</p>\n" } ]
Question: Find the solution of the pde $xu_y-yu_x=u$. Answer 1: <p>In polar coordinates: $$x=r\cos\phi \\ y=r\sin\phi \\ \frac{\partial u}{\partial x}=\cos \phi \frac{\partial u}{\partial r}-\frac{1}{r}\sin \phi \frac{\partial u}{\partial \phi}\\ \frac{\partial u}{\partial y}=\sin \phi \frac{\partial u}{\partial r}+\frac{1}{r}\cos \phi \frac{\partial u}{\partial \phi}\\ $$ $x\frac{\partial u}{\partial y}-y\frac{\partial u}{\partial x}=u$ becomes: $$\frac{\partial u}{\partial \phi}=u$$.<br> So: $$ u=f(r)e^\phi \\ =f(\sqrt{x^2+y^2})e^{atan2(x,y)} $$</p> Answer 2: <p>Do you know the characteristics method? Have a look at <a href="https://en.wikipedia.org/wiki/Method_of_characteristics" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Method_of_characteristics</a> .</p> <p>You form the PDE $x u_y - y u_x = u$ into the three ODEs </p> <p>$$y_s=x, \ x_s=-y,\ u_s=u$$</p> <p>with initial conditions (i.e. $s=0$) $~x(0,\eta)=\eta, y(0,\eta)=0, u(0,\eta)=\sin(\eta \pi/4)$. Here you have a non-characteristic Cauchy problem for $\eta \neq 0$ since $$ \begin{vmatrix} -y(0,\eta) &amp; x(0,\eta)' \\ x(0,\eta) &amp;  y(0,\eta)'  \end{vmatrix}=\begin{vmatrix} 0 &amp; 1 \\ \eta &amp;  0  \end{vmatrix}=-\eta \neq 0$$</p> <p>Solving these ODEs you get</p> <p>$$x=A(\eta) \cos(s)-B(\eta)\sin(s), \ y=A(\eta) \sin(s)+B(\eta)\cos(s), \ u=C(\eta) e^s=\sin(\eta \pi/4) e^s $$</p> <p>Now you have to plug in your initial conditions in $x$ &amp; $y$, solve for $\eta$ &amp; $s$ and plug this into $u$ to get your solution dependent on $x$ &amp; $y$.</p> <p>By the initial conditions we get $A(\eta)=\eta$ and $B(\eta)=0$. Therefore $x=\eta \cos(s), y=\eta \sin(s)$. By squaring and adding we get $x^2+y^2=\eta^2$ (note that $(x,y)\neq(0,0)$ by assumption) and $s=\arccos(\frac{x}{\sqrt{x^2+y^2}})=\arcsin(\frac{y}{\sqrt{x^2+y^2}})=\arctan(\frac{y}{x})$. Just choose one of them since all three forms are equivalent.</p> <p>So our solution reads</p> <p>$$u=\sin\left(\frac{\pi}{4}\sqrt{x^2+y^2}\right) \exp\left[\arccos\left(\frac{x}{\sqrt{x^2+y^2}}\right)\right]$$</p> <p>in a solution area with $(x,y)\neq (0,0)$, for example $G=\{(x,y) \in \mathbb{R}^2 | x&gt;0,y&gt;0 \}$.</p> <p>Checking that the initial condition is fulfilled: $$u(x,0)=\sin\left(\frac{\pi}{4}x \right) \exp[\arccos(1)]=\sin\left(\frac{\pi}{4}x \right) \exp(0)=\sin\left(\frac{\pi}{4}x \right)$$</p> Answer 3: <p>$$xu_y-yu_x=u$$ What you did is correct. The set of differential equations is : $$\frac{dx}{-y}=\frac{dy}{x}=\frac{du}{u}$$ The equation of a first family of characteristic curves is : $$x^2+y^2=c_1$$ At this point, we have to consider either $\frac{dx}{-y}=\frac{du}{u}$ or $\frac{dy}{x}=\frac{du}{u}$</p> <p>For example, with the first one : $-\frac{dx}{\sqrt{c_1-x^2 }}=\frac{du}{u}$ $$\ln|u|+\tan^{-1}\left(\frac{x}{\sqrt{c_1-x^2 }} \right)=\ln|u|+\tan^{-1}\left(\frac{x}{y} \right)=c_2$$ any constant $c_1$ and $c_2$ on the characteristic curves. Elsewhere, $c_1$ and $c_2$ are related : $$\ln|u|-\tan^{-1}\left(\frac{x}{y} \right) =f(x^2+y^2)$$ any differentiable function $f$. Or, with $F=e^f$, a form for the general solution of the PDE is : $$u(x,y)=\exp\left(-\tan^{-1}\left(\frac{x}{y} \right) \right)F(x^2+y^2)$$ Note : Considering the negative square root instead of the positive one leads to $u=\exp\left(\tan^{-1}\left(\frac{x}{y} \right) \right)F(x^2+y^2)$, which is a solution of $xu_y-yu_x=-u$.</p> <p>Boundary condition : $u(x,0)=\sin(\frac{\pi}{4}x) = \exp\left(-\frac{\pi}{2} \right)F(x^2)$</p> <p>so, the function $F(X)$ is determined : $F(X)=\exp\left(\frac{\pi}{2}\right)\sin(\frac{\pi}{4}\sqrt{X})$</p> <p>With $X=x^2+y^2$, bringing it back into the above general solution leads to the particular solution according to the boundary condition : $$u(x,y)=\exp\left(-\tan^{-1}\left(\frac{x}{y}\right) +\frac{\pi}{2}\right)\sin\left(\frac{\pi}{4}\sqrt{x^2+y^2}\right)$$ $$u(x,y)=\exp\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\sin\left(\frac{\pi}{4}\sqrt{x^2+y^2}\right)$$</p> Answer 4: <p>The most general solution to this equation is <span class="math-container">$$C(y^2 + x^2)^\frac{\alpha}{2}\bigg{(}\frac{y-ix}{y+ix}\bigg{)}^\frac{i}{2}$$</span> where <span class="math-container">$\alpha$</span> is any real number. The solution involves group theory and is rather lengthy, but the gist of it is that you assume the manifold of the solution is partitioned by the stabilizers of an invariant family of infinite groups: <span class="math-container">$x'=\lambda x$</span>, <span class="math-container">$y'=\lambda ^ \beta y$</span>, and <span class="math-container">$u'=\lambda ^ \alpha u$</span>. Apply this family of groups to the PDE and it becomes evident that, for invariance, <span class="math-container">$\beta =1$</span>. There is no restriction on <span class="math-container">$\alpha$</span>. Using the characteristic equation <span class="math-container">$\frac{dx}{x}=\frac{dy}{y}=\frac{du}{\alpha u}$</span> we can pull out two independent integrals, <span class="math-container">$\mu = \frac{y}{x}$</span> and <span class="math-container">$\nu = \frac{u}{x^ \alpha}$</span>. Because these stabilizers partition the manifold, the solution must be of the form <span class="math-container">$\nu = f(\mu)$</span> or <span class="math-container">$u=x^\alpha f(\frac{y}{x})$</span> where <span class="math-container">$\mu = \frac{y}{x}$</span> is the argument for the function f. Now take partial derivatives of this function, plug them back into the original PDE, and find function f. (this takes some work) Multiply this function by <span class="math-container">$x^\alpha$</span> and you are done.</p>
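Answers 2 and 3 arrive at equivalent closed forms on the first quadrant, where $\arccos(\frac{x}{\sqrt{x^2+y^2}})=\tan^{-1}(\frac{y}{x})$. As a quick sanity check (a minimal SymPy sketch, restricted to the branch of positive $x$ and $y$ used above), the snippet below confirms that the final formula satisfies both the PDE and the boundary condition:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # restrict to the quadrant of positive x and y
u = sp.exp(sp.atan(y / x)) * sp.sin(sp.pi / 4 * sp.sqrt(x**2 + y**2))

# Residual of the PDE x*u_y - y*u_x = u; it should simplify to 0 identically.
residual = x * sp.diff(u, y) - y * sp.diff(u, x) - u
print(sp.simplify(residual))      # expected: 0

# Boundary condition u(x, 0) = sin(pi*x/4).
print(sp.simplify(u.subs(y, 0)))  # expected: sin(pi*x/4)
```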
A question on a dense set of a topological space with countable tightness.
2,080,251
4
https://math.stackexchange.com/questions/2080251/a-question-on-dense-set-of-a-topological-space-with-countable-tightness
<p>If $X$ has a dense subset $A$ with $|A| &lt; 2^\omega$ and $t(X) = \omega$, then does $|X| \le 2^\omega$ hold? </p> <blockquote> <p>$X$ has countable tightness if, whenever $x\in \overline{A}$ for a subset $A$ of $X$, there exists a countable subset $A_0$ of $A$ such that $x\in\overline{A_0}$; this is denoted by $t(X)=\omega$.</p> </blockquote> <p>Thanks in advance.</p>
[ { "answer_id": 2080352, "score": 4, "text": "<p>I don’t recall the details, if indeed I ever knew them, but in <em>Fully closed mappings and the consistency of some theorems of general topology with the axioms of set theory</em>, Math. USSR-Sb. $\mathbf{28}$ $(1976)$, $1$-$26$, Vitaly Fedorchuk showed that the combinatorial principle $\diamondsuit$, which follows from $\mathsf{V}=\mathsf{L}$ and is therefore consistent with $\mathsf{ZFC}$, implies that there is a hereditarily separable, hereditarily normal, compact Hausdorff space $X$ of cardinality greater than $\mathfrak{c}=2^\omega$. A hereditarily separable space is countably tight, so $t(X)=d(X)=\omega$. Thus, it is at least consistent that there be a very nice counterexample to your conjecture.</p>\n" }, { "answer_id": 2080888, "score": 4, "text": "<p>This need not be true; as Brian astutely noted, Fedorchuk's compact $S$-space is a consistent counterexample. </p>\n\n<p>We can say a little bit: $A$ is dense in $X$, so every $x \in X$ is in $\overline{A}$, and by countable tightness there exists some countable subset $A_x \subseteq A$ such that $x \in \overline{A_x}$. One would like to conclude that there are at most as many points as there are countable subsets of $A$, and if $A$ has size at most $2^\omega$, it has at most $(2^\omega)^\omega = 2^\omega$ many countable subsets. This argument would work if $x \rightarrow A_x$ could somehow be made to be 1-1, but the consistent example kills all such hopes. 
If however $X$ were sequential and Hausdorff, we could pick a sequence from $A$ to converge to $x$, and as limits of sequences are then unique, the map from $X$ to the set of sequences (which has the same size as the countable subsets) would be 1-1, and the argument would work.</p>\n\n<p>So the statement does hold for sequential (much stronger than $t(X) = \omega$) Hausdorff spaces $X$. Fedorchuk's space was constructed to have no non-trivial convergent sequences at all. </p>\n" } ]
Question: A question on a dense set of a topological space with countable tightness. Answer 1: <p>I don’t recall the details, if indeed I ever knew them, but in <em>Fully closed mappings and the consistency of some theorems of general topology with the axioms of set theory</em>, Math. USSR-Sb. $\mathbf{28}$ $(1976)$, $1$-$26$, Vitaly Fedorchuk showed that the combinatorial principle $\diamondsuit$, which follows from $\mathsf{V}=\mathsf{L}$ and is therefore consistent with $\mathsf{ZFC}$, implies that there is a hereditarily separable, hereditarily normal, compact Hausdorff space $X$ of cardinality greater than $\mathfrak{c}=2^\omega$. A hereditarily separable space is countably tight, so $t(X)=d(X)=\omega$. Thus, it is at least consistent that there be a very nice counterexample to your conjecture.</p> Answer 2: <p>This need not be true; as Brian astutely noted, Fedorchuk's compact $S$-space is a consistent counterexample. </p> <p>We can say a little bit: $A$ is dense in $X$, so every $x \in X$ is in $\overline{A}$, and by countable tightness there exists some countable subset $A_x \subseteq A$ such that $x \in \overline{A_x}$. One would like to conclude that there are at most as many points as there are countable subsets of $A$, and if $A$ has size at most $2^\omega$, it has at most $(2^\omega)^\omega = 2^\omega$ many countable subsets. This argument would work if $x \rightarrow A_x$ could somehow be made to be 1-1, but the consistent example kills all such hopes. If however $X$ were sequential and Hausdorff, we could pick a sequence from $A$ to converge to $x$, and as limits of sequences are then unique, the map from $X$ to the set of sequences (which has the same size as the countable subsets) would be 1-1, and the argument would work.</p> <p>So the statement does hold for sequential (much stronger than $t(X) = \omega$) Hausdorff spaces $X$. Fedorchuk's space was constructed to have no non-trivial convergent sequences at all. </p>
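One step in Answer 2 is worth spelling out, since it is where the bound would come from (standard cardinal arithmetic, nothing beyond what the answer already uses): the countable subsets of $A$ inject into $A^\omega$ by enumerating each one, with repetitions if it is finite, so if $|A| \le 2^\omega$ then $$|A^\omega| \le (2^\omega)^\omega = 2^{\omega\cdot\omega} = 2^\omega.$$ This is exactly why a 1-1 assignment $x \mapsto A_x$ (or, in the sequential Hausdorff case, $x \mapsto$ a convergent sequence from $A$) would give $|X| \le 2^\omega$.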
Property of 2016
2,080,253
4
https://math.stackexchange.com/questions/2080253/property-of-2016
<p>What is the least multiple of $2016$ whose sum of digits is $2016$?</p> <p>I am totally stuck on this. All I know is that the number of digits must be greater than $224$, which is clearly not a practical hint. The prime decomposition may give some hint, but I am unsure. Any ideas? Thanks in advance.</p>
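The detailed answer below pins the minimum down with the Chinese Remainder Theorem and digit-sum bounds. As a sanity check of the specific numbers quoted there ($1504$, the $225$-digit length, and the final $598\ldots89888$), here is a short Python sketch; the loop at the end merely re-derives the $9$-digit seed that the answer reports:

```python
N = 2016                                 # = 2**5 * 3**2 * 7 = 32 * 63

assert pow(10, 225, N) == 1504           # the CRT computation from the answer

big = 10**225 - 1504                     # 221 nines followed by 8496
assert big % N == 0 and sum(map(int, str(big))) == 2016

cand = int('598' + '9' * 217 + '89888')  # the claimed minimum, 225 digits
assert cand % N == 0
assert sum(map(int, str(cand))) == 2016 and len(str(cand)) == 225

# Smallest 9-digit multiple of N with digit sum 72 (the answer's seed).
start = ((10**8 + N - 1) // N) * N
print(next(k for k in range(start, 10**9, N)
           if sum(map(int, str(k))) == 72))  # the answer reports 598989888
```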
[ { "answer_id": 2080755, "score": 4, "text": "<p>Looking at the last three digits, you are not going to get more than $24$ digit-sum from them, as the largest digit-sum avilable from multiples of $8$ below $1000$ is at $888$. So you will need $225$ digits in total, which is likely to be enough.</p>\n\n<p>I'll start by finding out the value of $10^{225} \\bmod 2016$</p>\n\n<p>$2016=2^5\\cdot3^2\\cdot7 = 32\\cdot 63$</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">$\\lambda(63)$</a> $= \\text{lcm}(\\lambda(3^2),\\lambda(7)) = \\text{lcm}(6,6) = 6 \\implies 10^6 \\equiv 1 \\bmod 63$ </p>\n\n<p>$10^{225} \\equiv 10^{222}\\cdot 10^3 \\equiv 10^3\\equiv 55 \\bmod 63$<br>\n$10^{225} \\equiv 0 \\bmod 32$ trivially</p>\n\n<p>Now <a href=\"https://en.wikipedia.org/wiki/Chinese_remainder_theorem\" rel=\"nofollow noreferrer\">Chinese Remainder Theorem</a> says we can find a value that is $ 55 \\bmod 63 $ and $0 \\bmod 32$ - which translates to looking for a multiple of $32$ that is $\\equiv 55 \\bmod 63$.</p>\n\n<p>$2\\cdot 32 \\equiv 1 \\bmod 63 \\implies (1+2(55-32))\\cdot 32 \\equiv 47\\cdot 32$ is the multiple required, and thus $47\\cdot 32 \\equiv 1504 \\equiv 10^{255} \\bmod 2016$ </p>\n\n<p>The greatest multiple of $2016$ less than $10^{225}$ is thus $10^{225}-1504 = \\overbrace{9\\ldots9}^{\\text{221 9s}}8496$, which actually does have the digit-sum required. So we already have an upper limit on the minimum value.</p>\n\n<p>Since we have this cycle of $6$ values for $10^k\\bmod 2016$ and $10^6 \\equiv 1 \\bmod 63$ $\\implies$ $999999 \\equiv 0 \\bmod 63$ we can insert \"$999999$\" into a valid shorter value at any point clear of the last five digits (the factor of $2^5$ being determined by those last five digits). Specifically we can multiply the higher digits by $10^6$ (which doesn't change their value $\\bmod 63$) and then add $999999\\cdot 10^k$ which of course idivisible by $63$.</p>\n\n<p>Thus when we find the minimum multiple in $9$ digits with a digit-sum of $72$ (which is $598989888$), we can extend that to $225$ digits using insertions of $999999$, keeping $5989$ as the high-value digits. </p>\n\n<p>This gives us $598\\overbrace{9\\ldots9}^{\\text{217 9s}}89888$ as the minimum multiple of $2016$ with a digit sum of $2016$.</p>\n" }, { "answer_id": 2080278, "score": 2, "text": "<p>HINT: The sum of the digits of $2016$ is $9$ - can you see a way of exploiting this fact to form a multiple of $2016$ which has some digits equal to $9$? If you can find a multiple with lots of digits equal to $9$ then you may be close to an answer. If you think about this the right way, you should find a way of getting a string of digits equal to $9$.</p>\n" }, { "answer_id": 2080755, "score": 4, "text": "<p>Looking at the last three digits, you are not going to get more than $24$ digit-sum from them, as the largest digit-sum avilable from multiples of $8$ below $1000$ is at $888$. 
Question: Property of 2016 Answer 1: <p>Looking at the last three digits, you are not going to get more than $24$ digit-sum from them, as the largest digit-sum available from multiples of $8$ below $1000$ is at $888$. So you will need $225$ digits in total, which is likely to be enough.</p> <p>I'll start by finding out the value of $10^{225} \bmod 2016$</p> <p>$2016=2^5\cdot3^2\cdot7 = 32\cdot 63$</p> <p><a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">$\lambda(63)$</a> $= \text{lcm}(\lambda(3^2),\lambda(7)) = \text{lcm}(6,6) = 6 \implies 10^6 \equiv 1 \bmod 63$ </p> <p>$10^{225} \equiv 10^{222}\cdot 10^3 \equiv 10^3\equiv 55 \bmod 63$<br> $10^{225} \equiv 0 \bmod 32$ trivially</p> <p>Now the <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="nofollow noreferrer">Chinese Remainder Theorem</a> says we can find a value that is $ 55 \bmod 63 $ and $0 \bmod 32$ - which translates to looking for a multiple of $32$ that is $\equiv 55 \bmod 63$.</p> <p>$2\cdot 32 \equiv 1 \bmod 63 \implies (1+2(55-32))\cdot 32 \equiv 47\cdot 32$ is the multiple required, and thus $47\cdot 32 = 1504 \equiv 10^{225} \bmod 2016$ </p> <p>The greatest multiple of $2016$ less than $10^{225}$ is thus $10^{225}-1504 = \overbrace{9\ldots9}^{\text{221 9s}}8496$, which actually does have the digit-sum required. So we already have an upper limit on the minimum value.</p> <p>Since we have this cycle of $6$ values for $10^k\bmod 2016$, and $10^6 \equiv 1 \bmod 63$ $\implies$ $999999 \equiv 0 \bmod 63$, we can insert "$999999$" into a valid shorter value at any point clear of the last five digits (the factor of $2^5$ being determined by those last five digits). Specifically, we can multiply the higher digits by $10^6$ (which doesn't change their value $\bmod 63$) and then add $999999\cdot 10^k$, which of course is divisible by $63$.</p> <p>Thus when we find the minimum multiple in $9$ digits with a digit-sum of $72$ (which is $598989888$), we can extend that to $225$ digits using insertions of $999999$, keeping $5989$ as the high-value digits. </p> <p>This gives us $598\overbrace{9\ldots9}^{\text{217 9s}}89888$ as the minimum multiple of $2016$ with a digit sum of $2016$.</p> Answer 2: <p>HINT: The sum of the digits of $2016$ is $9$ - can you see a way of exploiting this fact to form a multiple of $2016$ which has some digits equal to $9$? If you can find a multiple with lots of digits equal to $9$ then you may be close to an answer. If you think about this the right way, you should find a way of getting a string of digits equal to $9$.</p>
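The arithmetic above is easy to verify mechanically, since Python integers have arbitrary precision. The sketch below is an added sanity check, not part of the original thread; the helper `digit_sum` and the brute-force search are our own.

```python
# Check the key claims of the digit-sum argument for 2016.

def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

# CRT step: 10^225 mod 2016 should be 1504 (i.e. 0 mod 32 and 55 mod 63).
assert pow(10, 225, 2016) == 1504

# The greatest multiple of 2016 below 10^225 already has digit sum 2016.
upper = 10**225 - 1504
assert upper % 2016 == 0 and digit_sum(upper) == 2016

# The proposed minimum: "598", then 217 nines, then "89888" (225 digits).
n = int("598" + "9" * 217 + "89888")
assert len(str(n)) == 225 and n % 2016 == 0 and digit_sum(n) == 2016

# Brute-force the least 9-digit multiple of 2016 with digit sum 72;
# the answer claims this is 598989888.
start = -(-10**8 // 2016) * 2016          # first 9-digit multiple of 2016
least9 = next(m for m in range(start, 10**9, 2016) if digit_sum(m) == 72)
print(least9)
```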
For which $n$ is this inequality always true: $a^5+1\ge a^3+a^n$ for every $a&gt;0$?
2,080,262
4
https://math.stackexchange.com/questions/2080262/for-which-n-this-inequality-is-always-true-a51-ge-a3an-for-every-a0
<p>For which <span class="math-container">$n$</span> this inequality is always true <span class="math-container">$a^5+1\ge a^3+a^n$</span> for every <span class="math-container">$a&gt;0$</span>?</p> <blockquote> <p>1.<span class="math-container">$1$</span></p> <p>2.<span class="math-container">$2$</span></p> <p>3.<span class="math-container">$3$</span></p> <p>4.<span class="math-container">$4$</span></p> <p>5.<span class="math-container">$5$</span></p> </blockquote> <p><strong>My attempt</strong>:It's clear that it is true for <span class="math-container">$n=2$</span>.Because:</p> <p><span class="math-container">$(a^3-1)(a^2-1) \ge 0$</span></p> <p>Is true because <span class="math-container">$a^3-1$</span> and <span class="math-container">$a^2-1$</span> are both negative or both positive.But there is a problem:How can we make sure it doesn't hold for any other <span class="math-container">$n$</span>?</p>
[ { "answer_id": 2080276, "score": 1, "text": "<p>Note that equality always holds for $a = 1$.</p>\n\n<p>$\na^5 + 1 \\geq a^3 + a^n \\\\\n\\implies a^5 - a^3 \\geq a^n - 1 \\\\\n\\implies a^3(a + 1)(a - 1) \\geq a^n - 1\n$</p>\n\n<p><strong>Case 1:</strong> $a &gt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\geq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\leq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p>\n\n<p><strong>Case 2 doesn't hold for $n=1$:</strong></p>\n\n<p>$$\na^4+a^3 \\leq 1.\n$$</p>\n\n<p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \\geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \\alpha)$, i.e. the inequality doesn't hold.</p>\n\n<p><strong>Case 1 doesn't hold for $n=3$:</strong></p>\n\n<p>$\na^4+a^3 \\geq 1 + a + a^2 \\\\\n\\implies a^4 + a^3 + a^2 + a + 1 \\geq 2(1 + a + a^2) \\\\\n\\implies a^5 - 1 \\geq 2(a^3 - 1) \\\\\n\\implies a^5 - 2a^3 + 1 \\geq 0.\n$</p>\n\n<p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \\sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p>\n\n<p><strong>Case 1 doesn't hold for $n = 4$:</strong></p>\n\n<p>$\na^4 + a^3 &gt; 1 + a + a^2 + a^3 \\\\\n\\implies q(a) = a^3 - a^2 - 1 \\geq 0.\n$</p>\n\n<p>Now, $q(a) \\rightarrow \\infty$ as $a \\rightarrow \\infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p>\n\n<p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\\mathbb{R}^+$ only for $n=2$.</p>\n" }, { "answer_id": 2080277, "score": 1, "text": "<p>Clearly $\\;n=5\\;$ is out of question, as $\\;a^3\\le1\\;$ isn't true for <em>all</em> $\\;a&gt;0\\;$.</p>\n\n<p>Something similar with $\\;n=4\\;$ , as then </p>\n\n<p>$$a^5+1\\ge a^3+a^4\\iff (a+1)(a^4-a^3+a^2-a+1)\\ge a^3(a+1)$$</p>\n\n<p>which is false for values close to $\\;1\\;$ from the right, for example $\\;a=1.1\\;$ (check this).</p>\n\n<p>With $\\;n=3\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge 2a^3\\;,\\;\\;\\text{ false, again for}\\;\\;a=1.1$$</p>\n\n<p>Finally, for $\\;n=1\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge a^3+a\\;\\;,\\;\\;\\text{ false for}\\;\\;a=0.9 $$</p>\n" }, { "answer_id": 2080288, "score": 0, "text": "<p>$n = 1 \\implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \\to 1^{-} \\implies a^4+a^3 - 1 \\to 1 $, thus for $\\epsilon = 0.01$ we can find a $\\delta &gt; 0$ such that $ -\\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\\delta, 1)$.</p>\n\n<p>You did the case $n = 2$.</p>\n\n<p>$n = 3$, $a^5+1- 2a^3 \\ge 0$. For $0 &lt; a \\le 1\\implies a^5 \\ge a^6 \\implies a^5+1 - 2a^3\\ge a^6-2a^3 + 1 = (a^3-1)^2 \\ge 0$, and for $\\sqrt{\\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\\implies f'(a) = a^2(5a^2-6) &lt; 0\\implies f(a) &lt; f(1) = 0$.</p>\n\n<p>$n = 5$, choose $a &gt; 1$ it does not hold.</p>\n\n<p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. 
</p>\n\n<p>Thus $n = 2$ is the only value of $n$.</p>\n" }, { "answer_id": 2080276, "score": 1, "text": "<p>Note that equality always holds for $a = 1$.</p>\n\n<p>$\na^5 + 1 \\geq a^3 + a^n \\\\\n\\implies a^5 - a^3 \\geq a^n - 1 \\\\\n\\implies a^3(a + 1)(a - 1) \\geq a^n - 1\n$</p>\n\n<p><strong>Case 1:</strong> $a &gt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\geq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\leq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p>\n\n<p><strong>Case 2 doesn't hold for $n=1$:</strong></p>\n\n<p>$$\na^4+a^3 \\leq 1.\n$$</p>\n\n<p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \\geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \\alpha)$, i.e. the inequality doesn't hold.</p>\n\n<p><strong>Case 1 doesn't hold for $n=3$:</strong></p>\n\n<p>$\na^4+a^3 \\geq 1 + a + a^2 \\\\\n\\implies a^4 + a^3 + a^2 + a + 1 \\geq 2(1 + a + a^2) \\\\\n\\implies a^5 - 1 \\geq 2(a^3 - 1) \\\\\n\\implies a^5 - 2a^3 + 1 \\geq 0.\n$</p>\n\n<p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \\sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p>\n\n<p><strong>Case 1 doesn't hold for $n = 4$:</strong></p>\n\n<p>$\na^4 + a^3 &gt; 1 + a + a^2 + a^3 \\\\\n\\implies q(a) = a^3 - a^2 - 1 \\geq 0.\n$</p>\n\n<p>Now, $q(a) \\rightarrow \\infty$ as $a \\rightarrow \\infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p>\n\n<p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\\mathbb{R}^+$ only for $n=2$.</p>\n" }, { "answer_id": 2080277, "score": 1, "text": "<p>Clearly $\\;n=5\\;$ is out of question, as $\\;a^3\\le1\\;$ isn't true for <em>all</em> $\\;a&gt;0\\;$.</p>\n\n<p>Something similar with $\\;n=4\\;$ , as then </p>\n\n<p>$$a^5+1\\ge a^3+a^4\\iff (a+1)(a^4-a^3+a^2-a+1)\\ge a^3(a+1)$$</p>\n\n<p>which is false for values close to $\\;1\\;$ from the right, for example $\\;a=1.1\\;$ (check this).</p>\n\n<p>With $\\;n=3\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge 2a^3\\;,\\;\\;\\text{ false, again for}\\;\\;a=1.1$$</p>\n\n<p>Finally, for $\\;n=1\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge a^3+a\\;\\;,\\;\\;\\text{ false for}\\;\\;a=0.9 $$</p>\n" }, { "answer_id": 2080288, "score": 0, "text": "<p>$n = 1 \\implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \\to 1^{-} \\implies a^4+a^3 - 1 \\to 1 $, thus for $\\epsilon = 0.01$ we can find a $\\delta &gt; 0$ such that $ -\\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\\delta, 1)$.</p>\n\n<p>You did the case $n = 2$.</p>\n\n<p>$n = 3$, $a^5+1- 2a^3 \\ge 0$. For $0 &lt; a \\le 1\\implies a^5 \\ge a^6 \\implies a^5+1 - 2a^3\\ge a^6-2a^3 + 1 = (a^3-1)^2 \\ge 0$, and for $\\sqrt{\\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\\implies f'(a) = a^2(5a^2-6) &lt; 0\\implies f(a) &lt; f(1) = 0$.</p>\n\n<p>$n = 5$, choose $a &gt; 1$ it does not hold.</p>\n\n<p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. 
</p>\n\n<p>Thus $n = 2$ is the only value of $n$.</p>\n" }, { "answer_id": 2080276, "score": 1, "text": "<p>Note that equality always holds for $a = 1$.</p>\n\n<p>$\na^5 + 1 \\geq a^3 + a^n \\\\\n\\implies a^5 - a^3 \\geq a^n - 1 \\\\\n\\implies a^3(a + 1)(a - 1) \\geq a^n - 1\n$</p>\n\n<p><strong>Case 1:</strong> $a &gt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\geq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\leq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p>\n\n<p><strong>Case 2 doesn't hold for $n=1$:</strong></p>\n\n<p>$$\na^4+a^3 \\leq 1.\n$$</p>\n\n<p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \\geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \\alpha)$, i.e. the inequality doesn't hold.</p>\n\n<p><strong>Case 1 doesn't hold for $n=3$:</strong></p>\n\n<p>$\na^4+a^3 \\geq 1 + a + a^2 \\\\\n\\implies a^4 + a^3 + a^2 + a + 1 \\geq 2(1 + a + a^2) \\\\\n\\implies a^5 - 1 \\geq 2(a^3 - 1) \\\\\n\\implies a^5 - 2a^3 + 1 \\geq 0.\n$</p>\n\n<p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \\sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p>\n\n<p><strong>Case 1 doesn't hold for $n = 4$:</strong></p>\n\n<p>$\na^4 + a^3 &gt; 1 + a + a^2 + a^3 \\\\\n\\implies q(a) = a^3 - a^2 - 1 \\geq 0.\n$</p>\n\n<p>Now, $q(a) \\rightarrow \\infty$ as $a \\rightarrow \\infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p>\n\n<p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\\mathbb{R}^+$ only for $n=2$.</p>\n" }, { "answer_id": 2080277, "score": 1, "text": "<p>Clearly $\\;n=5\\;$ is out of question, as $\\;a^3\\le1\\;$ isn't true for <em>all</em> $\\;a&gt;0\\;$.</p>\n\n<p>Something similar with $\\;n=4\\;$ , as then </p>\n\n<p>$$a^5+1\\ge a^3+a^4\\iff (a+1)(a^4-a^3+a^2-a+1)\\ge a^3(a+1)$$</p>\n\n<p>which is false for values close to $\\;1\\;$ from the right, for example $\\;a=1.1\\;$ (check this).</p>\n\n<p>With $\\;n=3\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge 2a^3\\;,\\;\\;\\text{ false, again for}\\;\\;a=1.1$$</p>\n\n<p>Finally, for $\\;n=1\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge a^3+a\\;\\;,\\;\\;\\text{ false for}\\;\\;a=0.9 $$</p>\n" }, { "answer_id": 2080288, "score": 0, "text": "<p>$n = 1 \\implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \\to 1^{-} \\implies a^4+a^3 - 1 \\to 1 $, thus for $\\epsilon = 0.01$ we can find a $\\delta &gt; 0$ such that $ -\\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\\delta, 1)$.</p>\n\n<p>You did the case $n = 2$.</p>\n\n<p>$n = 3$, $a^5+1- 2a^3 \\ge 0$. For $0 &lt; a \\le 1\\implies a^5 \\ge a^6 \\implies a^5+1 - 2a^3\\ge a^6-2a^3 + 1 = (a^3-1)^2 \\ge 0$, and for $\\sqrt{\\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\\implies f'(a) = a^2(5a^2-6) &lt; 0\\implies f(a) &lt; f(1) = 0$.</p>\n\n<p>$n = 5$, choose $a &gt; 1$ it does not hold.</p>\n\n<p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. 
</p>\n\n<p>Thus $n = 2$ is the only value of $n$.</p>\n" }, { "answer_id": 2080276, "score": 1, "text": "<p>Note that equality always holds for $a = 1$.</p>\n\n<p>$\na^5 + 1 \\geq a^3 + a^n \\\\\n\\implies a^5 - a^3 \\geq a^n - 1 \\\\\n\\implies a^3(a + 1)(a - 1) \\geq a^n - 1\n$</p>\n\n<p><strong>Case 1:</strong> $a &gt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\geq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p>\n\n<p>$$\na^4 + a^3 \\leq 1 + a + a^2 + \\cdots + a^{n-1}\n$$</p>\n\n<p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p>\n\n<p><strong>Case 2 doesn't hold for $n=1$:</strong></p>\n\n<p>$$\na^4+a^3 \\leq 1.\n$$</p>\n\n<p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \\geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \\alpha)$, i.e. the inequality doesn't hold.</p>\n\n<p><strong>Case 1 doesn't hold for $n=3$:</strong></p>\n\n<p>$\na^4+a^3 \\geq 1 + a + a^2 \\\\\n\\implies a^4 + a^3 + a^2 + a + 1 \\geq 2(1 + a + a^2) \\\\\n\\implies a^5 - 1 \\geq 2(a^3 - 1) \\\\\n\\implies a^5 - 2a^3 + 1 \\geq 0.\n$</p>\n\n<p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \\sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p>\n\n<p><strong>Case 1 doesn't hold for $n = 4$:</strong></p>\n\n<p>$\na^4 + a^3 &gt; 1 + a + a^2 + a^3 \\\\\n\\implies q(a) = a^3 - a^2 - 1 \\geq 0.\n$</p>\n\n<p>Now, $q(a) \\rightarrow \\infty$ as $a \\rightarrow \\infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p>\n\n<p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\\mathbb{R}^+$ only for $n=2$.</p>\n" }, { "answer_id": 2080277, "score": 1, "text": "<p>Clearly $\\;n=5\\;$ is out of question, as $\\;a^3\\le1\\;$ isn't true for <em>all</em> $\\;a&gt;0\\;$.</p>\n\n<p>Something similar with $\\;n=4\\;$ , as then </p>\n\n<p>$$a^5+1\\ge a^3+a^4\\iff (a+1)(a^4-a^3+a^2-a+1)\\ge a^3(a+1)$$</p>\n\n<p>which is false for values close to $\\;1\\;$ from the right, for example $\\;a=1.1\\;$ (check this).</p>\n\n<p>With $\\;n=3\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge 2a^3\\;,\\;\\;\\text{ false, again for}\\;\\;a=1.1$$</p>\n\n<p>Finally, for $\\;n=1\\;$ we'd get</p>\n\n<p>$$a^5+1\\ge a^3+a\\;\\;,\\;\\;\\text{ false for}\\;\\;a=0.9 $$</p>\n" }, { "answer_id": 2080288, "score": 0, "text": "<p>$n = 1 \\implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \\to 1^{-} \\implies a^4+a^3 - 1 \\to 1 $, thus for $\\epsilon = 0.01$ we can find a $\\delta &gt; 0$ such that $ -\\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\\delta, 1)$.</p>\n\n<p>You did the case $n = 2$.</p>\n\n<p>$n = 3$, $a^5+1- 2a^3 \\ge 0$. For $0 &lt; a \\le 1\\implies a^5 \\ge a^6 \\implies a^5+1 - 2a^3\\ge a^6-2a^3 + 1 = (a^3-1)^2 \\ge 0$, and for $\\sqrt{\\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\\implies f'(a) = a^2(5a^2-6) &lt; 0\\implies f(a) &lt; f(1) = 0$.</p>\n\n<p>$n = 5$, choose $a &gt; 1$ it does not hold.</p>\n\n<p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. </p>\n\n<p>Thus $n = 2$ is the only value of $n$.</p>\n" } ]
Question: For which $n$ is this inequality always true: $a^5+1\ge a^3+a^n$ for every $a&gt;0$? Answer 1: <p>Note that equality always holds for $a = 1$.</p> <p>$ a^5 + 1 \geq a^3 + a^n \\ \implies a^5 - a^3 \geq a^n - 1 \\ \implies a^3(a + 1)(a - 1) \geq a^n - 1 $</p> <p><strong>Case 1:</strong> $a &gt; 1$. Then,</p> <p>$$ a^4 + a^3 \geq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p> <p>$$ a^4 + a^3 \leq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p> <p><strong>Case 2 doesn't hold for $n=1$:</strong></p> <p>$$ a^4+a^3 \leq 1. $$</p> <p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \alpha)$, i.e. the inequality doesn't hold.</p> <p><strong>Case 1 doesn't hold for $n=3$:</strong></p> <p>$ a^4+a^3 \geq 1 + a + a^2 \\ \implies a^4 + a^3 + a^2 + a + 1 \geq 2(1 + a + a^2) \\ \implies a^5 - 1 \geq 2(a^3 - 1) \\ \implies a^5 - 2a^3 + 1 \geq 0. $</p> <p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minimum at $a_0 = \sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p> <p><strong>Case 1 doesn't hold for $n = 4$:</strong></p> <p>$ a^4 + a^3 \geq 1 + a + a^2 + a^3 \\ \implies q(a) = a^3 - a^2 - 1 \geq 0. $</p> <p>Now, $q(a) \rightarrow \infty$ as $a \rightarrow \infty$, and $q(1) = -1$. Hence, if $a_0 &gt; 1$ is the real root of $q(a)$, then $q(a)$ is negative in the interval $(1, a_0)$.</p> <p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\mathbb{R}^+$ only for $n=2$.</p> Answer 2: <p>Clearly $\;n=5\;$ is out of the question, as $\;a^3\le1\;$ isn't true for <em>all</em> $\;a&gt;0\;$.</p> <p>Something similar happens with $\;n=4\;$ , as then </p> <p>$$a^5+1\ge a^3+a^4\iff (a+1)(a^4-a^3+a^2-a+1)\ge a^3(a+1)$$</p> <p>which is false for values close to $\;1\;$ from the right, for example $\;a=1.1\;$ (check this).</p> <p>With $\;n=3\;$ we'd get</p> <p>$$a^5+1\ge 2a^3\;,\;\;\text{ false, again for}\;\;a=1.1$$</p> <p>Finally, for $\;n=1\;$ we'd get</p> <p>$$a^5+1\ge a^3+a\;\;,\;\;\text{ false for}\;\;a=0.9 $$</p> Answer 3: <p>$n = 1 \implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \to 1^{-} \implies a^4+a^3 - 1 \to 1 $, thus for $\epsilon = 0.01$ we can find a $\delta &gt; 0$ such that $ -\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\delta, 1)$.</p> <p>You did the case $n = 2$.</p> <p>$n = 3$: $a^5+1- 2a^3 \ge 0$. For $0 &lt; a \le 1$ we have $a^5 \ge a^6 \implies a^5+1 - 2a^3\ge a^6-2a^3 + 1 = (a^3-1)^2 \ge 0$, and for $\sqrt{\dfrac{6}{5}} &gt; a &gt; 1$, $f(a) = a^5+1-2a^3\implies f'(a) = a^2(5a^2-6) &lt; 0\implies f(a) &lt; f(1) = 0$.</p> <p>$n = 5$: choose any $a &gt; 1$; it does not hold.</p> <p>$n = 4$: $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$; then the second factor is negative, so the expression fails to be greater than $0$. </p> <p>Thus $n = 2$ is the only value of $n$.</p>
Then,</p> <p>$$ a^4 + a^3 \leq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p> <p><strong>Case 2 doesn't hold for $n=1$:</strong></p> <p>$$ a^4+a^3 \leq 1. $$</p> <p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \alpha)$, i.e. the inequality doesn't hold.</p> <p><strong>Case 1 doesn't hold for $n=3$:</strong></p> <p>$ a^4+a^3 \geq 1 + a + a^2 \\ \implies a^4 + a^3 + a^2 + a + 1 \geq 2(1 + a + a^2) \\ \implies a^5 - 1 \geq 2(a^3 - 1) \\ \implies a^5 - 2a^3 + 1 \geq 0. $</p> <p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p> <p><strong>Case 1 doesn't hold for $n = 4$:</strong></p> <p>$ a^4 + a^3 &gt; 1 + a + a^2 + a^3 \\ \implies q(a) = a^3 - a^2 - 1 \geq 0. $</p> <p>Now, $q(a) \rightarrow \infty$ as $a \rightarrow \infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p> <p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\mathbb{R}^+$ only for $n=2$.</p> Answer 5: <p>Clearly $\;n=5\;$ is out of question, as $\;a^3\le1\;$ isn't true for <em>all</em> $\;a&gt;0\;$.</p> <p>Something similar with $\;n=4\;$ , as then </p> <p>$$a^5+1\ge a^3+a^4\iff (a+1)(a^4-a^3+a^2-a+1)\ge a^3(a+1)$$</p> <p>which is false for values close to $\;1\;$ from the right, for example $\;a=1.1\;$ (check this).</p> <p>With $\;n=3\;$ we'd get</p> <p>$$a^5+1\ge 2a^3\;,\;\;\text{ false, again for}\;\;a=1.1$$</p> <p>Finally, for $\;n=1\;$ we'd get</p> <p>$$a^5+1\ge a^3+a\;\;,\;\;\text{ false for}\;\;a=0.9 $$</p> Answer 6: <p>$n = 1 \implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \to 1^{-} \implies a^4+a^3 - 1 \to 1 $, thus for $\epsilon = 0.01$ we can find a $\delta &gt; 0$ such that $ -\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\delta, 1)$.</p> <p>You did the case $n = 2$.</p> <p>$n = 3$, $a^5+1- 2a^3 \ge 0$. For $0 &lt; a \le 1\implies a^5 \ge a^6 \implies a^5+1 - 2a^3\ge a^6-2a^3 + 1 = (a^3-1)^2 \ge 0$, and for $\sqrt{\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\implies f'(a) = a^2(5a^2-6) &lt; 0\implies f(a) &lt; f(1) = 0$.</p> <p>$n = 5$, choose $a &gt; 1$ it does not hold.</p> <p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. </p> <p>Thus $n = 2$ is the only value of $n$.</p> Answer 7: <p>Note that equality always holds for $a = 1$.</p> <p>$ a^5 + 1 \geq a^3 + a^n \\ \implies a^5 - a^3 \geq a^n - 1 \\ \implies a^3(a + 1)(a - 1) \geq a^n - 1 $</p> <p><strong>Case 1:</strong> $a &gt; 1$. Then,</p> <p>$$ a^4 + a^3 \geq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p> <p>$$ a^4 + a^3 \leq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p> <p><strong>Case 2 doesn't hold for $n=1$:</strong></p> <p>$$ a^4+a^3 \leq 1. $$</p> <p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\alpha &gt; 1$. 
This means that $f(x) &lt; 0$ in the interval $(1, \alpha)$, i.e. the inequality doesn't hold.</p> <p><strong>Case 1 doesn't hold for $n=3$:</strong></p> <p>$ a^4+a^3 \geq 1 + a + a^2 \\ \implies a^4 + a^3 + a^2 + a + 1 \geq 2(1 + a + a^2) \\ \implies a^5 - 1 \geq 2(a^3 - 1) \\ \implies a^5 - 2a^3 + 1 \geq 0. $</p> <p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \sqrt{6/5}$, where $p(a_0) = -0.05162... &lt; 0$.</p> <p><strong>Case 1 doesn't hold for $n = 4$:</strong></p> <p>$ a^4 + a^3 &gt; 1 + a + a^2 + a^3 \\ \implies q(a) = a^3 - a^2 - 1 \geq 0. $</p> <p>Now, $q(a) \rightarrow \infty$ as $a \rightarrow \infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p> <p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\mathbb{R}^+$ only for $n=2$.</p> Answer 8: <p>Clearly $\;n=5\;$ is out of question, as $\;a^3\le1\;$ isn't true for <em>all</em> $\;a&gt;0\;$.</p> <p>Something similar with $\;n=4\;$ , as then </p> <p>$$a^5+1\ge a^3+a^4\iff (a+1)(a^4-a^3+a^2-a+1)\ge a^3(a+1)$$</p> <p>which is false for values close to $\;1\;$ from the right, for example $\;a=1.1\;$ (check this).</p> <p>With $\;n=3\;$ we'd get</p> <p>$$a^5+1\ge 2a^3\;,\;\;\text{ false, again for}\;\;a=1.1$$</p> <p>Finally, for $\;n=1\;$ we'd get</p> <p>$$a^5+1\ge a^3+a\;\;,\;\;\text{ false for}\;\;a=0.9 $$</p> Answer 9: <p>$n = 1 \implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \to 1^{-} \implies a^4+a^3 - 1 \to 1 $, thus for $\epsilon = 0.01$ we can find a $\delta &gt; 0$ such that $ -\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\delta, 1)$.</p> <p>You did the case $n = 2$.</p> <p>$n = 3$, $a^5+1- 2a^3 \ge 0$. For $0 &lt; a \le 1\implies a^5 \ge a^6 \implies a^5+1 - 2a^3\ge a^6-2a^3 + 1 = (a^3-1)^2 \ge 0$, and for $\sqrt{\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\implies f'(a) = a^2(5a^2-6) &lt; 0\implies f(a) &lt; f(1) = 0$.</p> <p>$n = 5$, choose $a &gt; 1$ it does not hold.</p> <p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. </p> <p>Thus $n = 2$ is the only value of $n$.</p> Answer 10: <p>Note that equality always holds for $a = 1$.</p> <p>$ a^5 + 1 \geq a^3 + a^n \\ \implies a^5 - a^3 \geq a^n - 1 \\ \implies a^3(a + 1)(a - 1) \geq a^n - 1 $</p> <p><strong>Case 1:</strong> $a &gt; 1$. Then,</p> <p>$$ a^4 + a^3 \geq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p><strong>Case 2:</strong> $0 &lt; a &lt; 1$. Then,</p> <p>$$ a^4 + a^3 \leq 1 + a + a^2 + \cdots + a^{n-1} $$</p> <p>Now, both cases hold true for $n = 2$ and certainly for $a = 1$.</p> <p><strong>Case 2 doesn't hold for $n=1$:</strong></p> <p>$$ a^4+a^3 \leq 1. $$</p> <p>Let $a = 1/x$ where $x&gt;1$. Then, the inequality becomes $f(x) = x^4 - x - 1 \geq 0$. Now, $f(x)$ is a curve opening upwards which is negative at $x = 1$. Hence, $f(x)$ has a positive real root, say $\alpha &gt; 1$. This means that $f(x) &lt; 0$ in the interval $(1, \alpha)$, i.e. the inequality doesn't hold.</p> <p><strong>Case 1 doesn't hold for $n=3$:</strong></p> <p>$ a^4+a^3 \geq 1 + a + a^2 \\ \implies a^4 + a^3 + a^2 + a + 1 \geq 2(1 + a + a^2) \\ \implies a^5 - 1 \geq 2(a^3 - 1) \\ \implies a^5 - 2a^3 + 1 \geq 0. $</p> <p>Now, $p(a) = a^5 - 2a^3 + 1$ has a minima at $a_0 = \sqrt{6/5}$, where $p(a_0) = -0.05162... 
&lt; 0$.</p> <p><strong>Case 1 doesn't hold for $n = 4$:</strong></p> <p>$ a^4 + a^3 &gt; 1 + a + a^2 + a^3 \\ \implies q(a) = a^3 - a^2 - 1 \geq 0. $</p> <p>Now, $q(a) \rightarrow \infty$ as $a \rightarrow \infty$, and $q(1) = -1$. Hence, if a root of $q(a)$ be $a_0 &gt; 1$, then $q(a)$ is negative in the interval $(1, a_0)$.</p> <p>Case 1 clearly <strong>doesn't</strong> hold for $n&gt;4$. Thus the inequality holds for the entire $\mathbb{R}^+$ only for $n=2$.</p> Answer 11: <p>Clearly $\;n=5\;$ is out of question, as $\;a^3\le1\;$ isn't true for <em>all</em> $\;a&gt;0\;$.</p> <p>Something similar with $\;n=4\;$ , as then </p> <p>$$a^5+1\ge a^3+a^4\iff (a+1)(a^4-a^3+a^2-a+1)\ge a^3(a+1)$$</p> <p>which is false for values close to $\;1\;$ from the right, for example $\;a=1.1\;$ (check this).</p> <p>With $\;n=3\;$ we'd get</p> <p>$$a^5+1\ge 2a^3\;,\;\;\text{ false, again for}\;\;a=1.1$$</p> <p>Finally, for $\;n=1\;$ we'd get</p> <p>$$a^5+1\ge a^3+a\;\;,\;\;\text{ false for}\;\;a=0.9 $$</p> Answer 12: <p>$n = 1 \implies a^5+1 - a^3-a = a^3(a^2 -1) - (a-1) = (a-1)(a^4+a^3-1)$. Observe that $a \to 1^{-} \implies a^4+a^3 - 1 \to 1 $, thus for $\epsilon = 0.01$ we can find a $\delta &gt; 0$ such that $ -\delta &lt; a - 1 &lt; 0$, and $|a^4+a^3-1 - 1| &lt; 0.01$ or $a^4+a^3-1 &gt; 0.99$, thus $(a-1)(a^4+a^3-1) &lt; 0$ on $(1-\delta, 1)$.</p> <p>You did the case $n = 2$.</p> <p>$n = 3$, $a^5+1- 2a^3 \ge 0$. For $0 &lt; a \le 1\implies a^5 \ge a^6 \implies a^5+1 - 2a^3\ge a^6-2a^3 + 1 = (a^3-1)^2 \ge 0$, and for $\sqrt{\dfrac{6}{5}} &gt; a &gt; 1, f(a) = a^5+1-2a^3\implies f'(a) = a^2(5a^2-6) &lt; 0\implies f(a) &lt; f(1) = 0$.</p> <p>$n = 5$, choose $a &gt; 1$ it does not hold.</p> <p>$n = 4$. $a^5+1 - a^3 - a^4 = a^3(a^2-1) - (a^2-1)(a^2+1) = (a^2-1)(a^3-a^2-1)$. You can choose $a$ slightly bigger than $1$, then it fails to be greater than $0$. </p> <p>Thus $n = 2$ is the only value of $n$.</p>
Solving $\sin^2 x+ \sin^2 2x-\sin^2 3x-\sin^2 4x=0$
2,080,315
4
https://math.stackexchange.com/questions/2080315/solving-sin2-x-sin2-2x-sin2-3x-sin2-4x-0
<p>I am stuck on a trigonometric problem which is as follow:</p> <blockquote> <p>Solve $\sin^2 x+ \sin^2 2x-\sin^2 3x-\sin^2 4x=0$ for $x$.</p> </blockquote> <p>these kind of problems tease me always. Please help me. To get a rough idea of how can I approach such problems.</p>
[ { "answer_id": 2080349, "score": 7, "text": "<p>First linearise with:\n$$\\sin^2 u=\\frac{1-\\cos 2u}2.$$\nSimplifying yields the equation \n$$\\cos 2x+\\cos 4x=\\cos6x+\\cos 8x.$$\nThen the factorisation formula:\n$$\\cos p+\\cos q=2\\cos\\frac{p+q}2\\cos\\frac{p-q}2 $$\nlets you rewrite the equation, after some simplification, as\n$$\\cos x\\cos 3x=\\cos x\\cos 7x\\iff \\begin{cases}\\cos x=0\\quad\\text{or}\\\\\\cos 3x=\\cos 7x.\\end{cases}$$\nSolutions to the first equation: $\\qquad x\\equiv \\dfrac\\pi2\\mod\\pi$.</p>\n\n<p>Solutions to the second equation:\n$$7x\\equiv \\pm 3x\\mod 2\\pi\\iff\\begin{cases} 4x\\equiv 0\\mod 2\\pi\\\\10x\\equiv 0\\mod2\\pi\\end{cases}\\iff\\begin{cases} x\\equiv 0\\mod \\dfrac\\pi2\\\\[1ex]x\\equiv 0\\mod\\dfrac\\pi5\\end{cases} $$\nThe solutions in the second series are redundant either with the first or the third series. The set of solutions can ultimately be described as:\n$$x\\equiv \\dfrac\\pi2\\mod\\pi,\\quad x\\equiv 0\\mod\\dfrac\\pi5.\n$$</p>\n" }, { "answer_id": 2080429, "score": 2, "text": "<p>HINT:</p>\n\n<p>Using <a href=\"https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b\">Prove $ \\sin(A+B)\\sin(A-B)=\\sin^2A-\\sin^2B $</a>,</p>\n\n<p>$$\\sin^23x-\\sin^2x+\\sin^24x-\\sin^22x=\\sin2x(\\sin4x+\\sin6x)$$</p>\n\n<p>Now use <a href=\"http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html\" rel=\"nofollow noreferrer\">Prosthaphaeresis Formulas</a> $\\sin4x+\\sin6x=2\\sin5x\\cos x$</p>\n" }, { "answer_id": 2080349, "score": 7, "text": "<p>First linearise with:\n$$\\sin^2 u=\\frac{1-\\cos 2u}2.$$\nSimplifying yields the equation \n$$\\cos 2x+\\cos 4x=\\cos6x+\\cos 8x.$$\nThen the factorisation formula:\n$$\\cos p+\\cos q=2\\cos\\frac{p+q}2\\cos\\frac{p-q}2 $$\nlets you rewrite the equation, after some simplification, as\n$$\\cos x\\cos 3x=\\cos x\\cos 7x\\iff \\begin{cases}\\cos x=0\\quad\\text{or}\\\\\\cos 3x=\\cos 7x.\\end{cases}$$\nSolutions to the first equation: $\\qquad x\\equiv \\dfrac\\pi2\\mod\\pi$.</p>\n\n<p>Solutions to the second equation:\n$$7x\\equiv \\pm 3x\\mod 2\\pi\\iff\\begin{cases} 4x\\equiv 0\\mod 2\\pi\\\\10x\\equiv 0\\mod2\\pi\\end{cases}\\iff\\begin{cases} x\\equiv 0\\mod \\dfrac\\pi2\\\\[1ex]x\\equiv 0\\mod\\dfrac\\pi5\\end{cases} $$\nThe solutions in the second series are redundant either with the first or the third series. 
Question: Solving $\sin^2 x+ \sin^2 2x-\sin^2 3x-\sin^2 4x=0$ Answer 1: <p>First linearise with: $$\sin^2 u=\frac{1-\cos 2u}2.$$ Simplifying yields the equation $$\cos 2x+\cos 4x=\cos6x+\cos 8x.$$ Then the factorisation formula: $$\cos p+\cos q=2\cos\frac{p+q}2\cos\frac{p-q}2 $$ lets you rewrite the equation, after some simplification, as $$\cos x\cos 3x=\cos x\cos 7x\iff \begin{cases}\cos x=0\quad\text{or}\\\cos 3x=\cos 7x.\end{cases}$$ Solutions to the first equation: $\qquad x\equiv \dfrac\pi2\mod\pi$.</p> <p>Solutions to the second equation: $$7x\equiv \pm 3x\mod 2\pi\iff\begin{cases} 4x\equiv 0\mod 2\pi\\10x\equiv 0\mod2\pi\end{cases}\iff\begin{cases} x\equiv 0\mod \dfrac\pi2\\[1ex]x\equiv 0\mod\dfrac\pi5\end{cases} $$ The solutions in the second series are redundant either with the first or the third series. The set of solutions can ultimately be described as: $$x\equiv \dfrac\pi2\mod\pi,\quad x\equiv 0\mod\dfrac\pi5. $$</p> Answer 2: <p>HINT:</p> <p>Using <a href="https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b">Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $</a>,</p> <p>$$\sin^23x-\sin^2x+\sin^24x-\sin^22x=\sin2x(\sin4x+\sin6x)$$</p> <p>Now use <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">Prosthaphaeresis Formulas</a> $\sin4x+\sin6x=2\sin5x\cos x$</p>
$$</p> Answer 6: <p>HINT:</p> <p>Using <a href="https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b">Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $</a>,</p> <p>$$\sin^23x-\sin^2x+\sin^24x-\sin^22x=\sin2x(\sin4x+\sin6x)$$</p> <p>Now use <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">Prosthaphaeresis Formulas</a> $\sin4x+\sin6x=2\sin5x\cos x$</p> Answer 7: <p>First linearise with: $$\sin^2 u=\frac{1-\cos 2u}2.$$ Simplifying yields the equation $$\cos 2x+\cos 4x=\cos6x+\cos 8x.$$ Then the factorisation formula: $$\cos p+\cos q=2\cos\frac{p+q}2\cos\frac{p-q}2 $$ lets you rewrite the equation, after some simplification, as $$\cos x\cos 3x=\cos x\cos 7x\iff \begin{cases}\cos x=0\quad\text{or}\\\cos 3x=\cos 7x.\end{cases}$$ Solutions to the first equation: $\qquad x\equiv \dfrac\pi2\mod\pi$.</p> <p>Solutions to the second equation: $$7x\equiv \pm 3x\mod 2\pi\iff\begin{cases} 4x\equiv 0\mod 2\pi\\10x\equiv 0\mod2\pi\end{cases}\iff\begin{cases} x\equiv 0\mod \dfrac\pi2\\[1ex]x\equiv 0\mod\dfrac\pi5\end{cases} $$ The solutions in the second series are redundant either with the first or the third series. The set of solutions can ultimately be described as: $$x\equiv \dfrac\pi2\mod\pi,\quad x\equiv 0\mod\dfrac\pi5. $$</p> Answer 8: <p>HINT:</p> <p>Using <a href="https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b">Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $</a>,</p> <p>$$\sin^23x-\sin^2x+\sin^24x-\sin^22x=\sin2x(\sin4x+\sin6x)$$</p> <p>Now use <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">Prosthaphaeresis Formulas</a> $\sin4x+\sin6x=2\sin5x\cos x$</p>
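The two solution families above are easy to sanity-check numerically. A minimal Python sketch (the helper name `f` and the sampled multiples are my own choices for illustration):

```python
import numpy as np

def f(x):
    # Left-hand side of the equation solved in the entry above.
    return np.sin(x)**2 + np.sin(2*x)**2 - np.sin(3*x)**2 - np.sin(4*x)**2

# Family 1: x = pi/2 + k*pi.  Family 2: x = k*pi/5.
fam1 = np.pi/2 + np.arange(-5, 6) * np.pi
fam2 = np.arange(-10, 11) * np.pi / 5
assert np.allclose(f(fam1), 0.0, atol=1e-12)
assert np.allclose(f(fam2), 0.0, atol=1e-12)

# A point outside both families should not be a root.
print(f(0.3))  # about -1.08, clearly nonzero
```

Both assertions pass, consistent with the solution set $x\equiv\dfrac\pi2\mod\pi$, $x\equiv 0\mod\dfrac\pi5$.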
Is probability that $x&lt;2$ equal to the probability that $x^2&lt;4$ given $1&lt;x&lt;3$?
2,080,380
4
https://math.stackexchange.com/questions/2080380/is-probability-that-x2-equal-to-the-probability-that-x24-given-1x3
<p>Assume $x$ is a real number uniformly distributed over the interval $(1,3)$,</p> <p>so $x^2$ is also uniformly distributed over the interval $(1,9)$ {as for every $x=a\in (1,3)$ there exists $x^2=a^2\in (1,9)$}.</p> <p>The probability that $x&lt;2$ would be $\frac{1}{2}$, as $x$ can be in $(1,2)$ while the sample set of $x$ is $(1,3)$; meanwhile, the probability that $x^2&lt;4$ is $\frac{3}{8}$, as $x^2$ can be in $(1,4)$ while the sample set of $x^2$ is $(1,9)$.</p> <p>So why is the probability that $x&lt;2$ different from that of $x^2&lt;4$ if the two events are identical?</p>
[ { "answer_id": 2080400, "score": 3, "text": "<p>If $X$ is uniformly distributed on $[1,3]$, then $P(X^2\\le a^2)=P(X\\le a)=\\frac{a-1}2$. Therefore, $P(X^2\\le a)=\\frac{\\sqrt{a}-1}2$. Thus, the PDF of $X^2$ is $\\frac1{4\\sqrt{a}}$.</p>\n\n<p>That is, $X^2$ is <em>not</em> uniformly distributed on $[1,9]$.</p>\n" }, { "answer_id": 2080396, "score": 0, "text": "<p>You decided for any $x$ to match $x^2$, so you are claiming that $x&lt;2$ is identical to $x^2&lt;4$, but you can also match $4x-3$ in the interval $(1,9)$ for all $x$ in the interval $(1,3)$, which makes $x&lt;2$ the same as $4x-3&lt;5$.</p>\n\n<p>That is the problem of defining probability on infinite group.</p>\n" }, { "answer_id": 2080400, "score": 3, "text": "<p>If $X$ is uniformly distributed on $[1,3]$, then $P(X^2\\le a^2)=P(X\\le a)=\\frac{a-1}2$. Therefore, $P(X^2\\le a)=\\frac{\\sqrt{a}-1}2$. Thus, the PDF of $X^2$ is $\\frac1{4\\sqrt{a}}$.</p>\n\n<p>That is, $X^2$ is <em>not</em> uniformly distributed on $[1,9]$.</p>\n" }, { "answer_id": 2080396, "score": 0, "text": "<p>You decided for any $x$ to match $x^2$, so you are claiming that $x&lt;2$ is identical to $x^2&lt;4$, but you can also match $4x-3$ in the interval $(1,9)$ for all $x$ in the interval $(1,3)$, which makes $x&lt;2$ the same as $4x-3&lt;5$.</p>\n\n<p>That is the problem of defining probability on infinite group.</p>\n" }, { "answer_id": 2080400, "score": 3, "text": "<p>If $X$ is uniformly distributed on $[1,3]$, then $P(X^2\\le a^2)=P(X\\le a)=\\frac{a-1}2$. Therefore, $P(X^2\\le a)=\\frac{\\sqrt{a}-1}2$. Thus, the PDF of $X^2$ is $\\frac1{4\\sqrt{a}}$.</p>\n\n<p>That is, $X^2$ is <em>not</em> uniformly distributed on $[1,9]$.</p>\n" }, { "answer_id": 2080396, "score": 0, "text": "<p>You decided for any $x$ to match $x^2$, so you are claiming that $x&lt;2$ is identical to $x^2&lt;4$, but you can also match $4x-3$ in the interval $(1,9)$ for all $x$ in the interval $(1,3)$, which makes $x&lt;2$ the same as $4x-3&lt;5$.</p>\n\n<p>That is the problem of defining probability on infinite group.</p>\n" }, { "answer_id": 2080400, "score": 3, "text": "<p>If $X$ is uniformly distributed on $[1,3]$, then $P(X^2\\le a^2)=P(X\\le a)=\\frac{a-1}2$. Therefore, $P(X^2\\le a)=\\frac{\\sqrt{a}-1}2$. Thus, the PDF of $X^2$ is $\\frac1{4\\sqrt{a}}$.</p>\n\n<p>That is, $X^2$ is <em>not</em> uniformly distributed on $[1,9]$.</p>\n" }, { "answer_id": 2080396, "score": 0, "text": "<p>You decided for any $x$ to match $x^2$, so you are claiming that $x&lt;2$ is identical to $x^2&lt;4$, but you can also match $4x-3$ in the interval $(1,9)$ for all $x$ in the interval $(1,3)$, which makes $x&lt;2$ the same as $4x-3&lt;5$.</p>\n\n<p>That is the problem of defining probability on infinite group.</p>\n" } ]
Question: Is probability that $x&lt;2$ equal to the probability that $x^2&lt;4$ given $1&lt;x&lt;3$? Answer 1: <p>If $X$ is uniformly distributed on $[1,3]$, then $P(X^2\le a^2)=P(X\le a)=\frac{a-1}2$. Therefore, $P(X^2\le a)=\frac{\sqrt{a}-1}2$. Thus, the PDF of $X^2$ is $\frac1{4\sqrt{a}}$.</p> <p>That is, $X^2$ is <em>not</em> uniformly distributed on $[1,9]$.</p> Answer 2: <p>You decided for any $x$ to match $x^2$, so you are claiming that $x&lt;2$ is identical to $x^2&lt;4$, but you can also match $4x-3$ in the interval $(1,9)$ for all $x$ in the interval $(1,3)$, which makes $x&lt;2$ the same as $4x-3&lt;5$.</p> <p>That is the problem of defining probability on infinite sets.</p>
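Answer 1's point, that the two events coincide and that $X^2$ is not uniform on $(1,9)$, can be checked by simulation. A minimal Python sketch, assuming NumPy is available (the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 3.0, size=1_000_000)

# {x < 2} and {x**2 < 4} are literally the same event, so both are near 1/2.
print(np.mean(x < 2))      # ~ 0.5
print(np.mean(x**2 < 4))   # ~ 0.5, not 3/8

# Empirical CDF of X^2 matches (sqrt(a) - 1)/2, not a uniform CDF (a - 1)/8.
a = 4.0
print(np.mean(x**2 <= a), (np.sqrt(a) - 1) / 2)  # both ~ 0.5
```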
How to find the inverse of an algebraic integer mod q
2,080,393
4
https://math.stackexchange.com/questions/2080393/how-to-find-the-inverse-of-an-algebraic-integer-mod-q
<p>Let $K = \mathbb{Q}[i]$ and $\mathcal{O}_K = \mathbb{Z}[i]$. Suppose $a = 1 + 2i$ and $q = 7$. Since $a$ and $q$ are coprime algebraic integers, there must exist $b \in \mathcal{O}_K$ s.t. $a b \equiv 1 \pmod{q}$.</p> <p>Is there an algorithm for finding such a $b$ for a general number field $K$?</p>
[ { "answer_id": 2080423, "score": 1, "text": "<p><em>Algorithm</em> would be a bit exaggerated for a simple observation. In general, $z=u+iv$ in a unit mod $7$ in $\\mathbf Z[i]$ if and only if its norm $N(z)=u^2+v^2$ is a unit in $\\mathbf Z/7\\mathbf Z$, and its inverse is $N(z)^{-1}\\bar z$.</p>\n\n<p>Hence, all you have to do is proceed with the <em>extended Euclidean algorithm</em> in $\\mathbf Z$ for $N(z)$.</p>\n\n<p>In the present case, $N(1+2i)=5$, its inverse mod. $7$ is $3$, hence\n$$(1+2i)^{-1} \\bmod 7=3(1-2i)=3+i.$$</p>\n" }, { "answer_id": 2080399, "score": 0, "text": "<p>If I understood correctly, you need</p>\n\n<p>$$(1+2i)(a+bi)=1+7m\\;,\\;\\;a,b,m\\in\\Bbb Z\\implies a-2b+(2a+b)i=1+7m\\iff$$</p>\n\n<p>$$\\begin{cases}a-2b=1+7m\\\\{}\\\\2a+b=0\\end{cases}\\;\\;\\;\\implies5a=1+7m\\implies a=3\\pmod 7\\;,\\;\\;b=-2a=1\\pmod7$$</p>\n\n<p>Thus, for example, with $\\;3+i\\;$:</p>\n\n<p>$$(3+i)(1+2i)=1+7i=1\\pmod7$$</p>\n\n<p>But you could pick as well $\\;10+8i\\;$ say , since</p>\n\n<p>$$(10+8i)(1+2i)=-6+28i=1\\pmod7$$</p>\n" }, { "answer_id": 2080409, "score": 0, "text": "<p>No. Consider $K=\\Bbb Q[\\sqrt{-5}]$. Let $a=1+\\sqrt{-5}$, $q=2$.</p>\n\n<p>Suppose that there exists $b,c\\in \\Bbb Z[\\sqrt{-5}]$ such that $ab+cq=1$. But now note that\n$$3q=(1-\\sqrt{-5})a$$\nthat is,\n$$3bq=(1-\\sqrt{-5})ab=(1-\\sqrt{-5})(1-cq)$$\ntherefore,\n$$q(3b+(1-\\sqrt{-5})c)=1-\\sqrt{-5}$$\nwhich is clearly a contradiction.</p>\n\n<p>In general, we can say that if the equation $ax\\equiv 1\\pmod m$ is solvable on some integral domain $R$ when $a$ and $m$ are coprime, then $R$ is an Unique Factorization Domain. But rings like $\\Bbb Z[\\sqrt d]$ where $d$ is a non square integer are not always UFD.</p>\n\n<p>For those such rings that are Euclidean domains, the Euclidean algorithm works and you can use it to find inverses, just like in $\\Bbb Z$.</p>\n" }, { "answer_id": 2080414, "score": 0, "text": "<p>In the specific case I would proceed as follows.</p>\n\n<p>Since $7\\equiv 3\\bmod 4$ one knows that $7{\\cal O}_K$ is a prime ideal and\n$$\n{\\cal O}_K/7{\\cal O}_K\\simeq{\\Bbb F}_7[i]\n$$\nis the field with $7^2$ elements.</p>\n\n<p>Then one can solve $(1+2i)(x+2y)=1$ in ${\\Bbb F}_7[i]$ which gives rise to a linear system in $\\Bbb F_7^2$ pretty much as in DonAntonio's answer (which came out as I'm writing this).\nIt gives the solution\n$$\nx+2y\\equiv 3-6i\\equiv 3+i\\bmod 7{\\cal O}_K.\n$$</p>\n\n<p>I guess that one can tackle the general problem along the same lines as long as the ideal $q{\\cal O}_K$ is maximal.</p>\n\n<p>If not, one has to preliminary find the decomposition $q{\\cal O}_K=\\prod_i\\wp_i^{e_i}$ in ${\\cal O}_K$ where the $\\wp_i$ are maximal. Then your starting element $a$ will be invertible modulo $q$ if and only if it is invertible modulo each and every $\\wp_i^{e_i}$ by the Chinese Remainder Theorem.</p>\n\n<p>Of course in the rare event where ${\\cal O}_K$ is euclidean, one can use the division algorithm to explicit the Bezout identity.</p>\n" }, { "answer_id": 2080423, "score": 1, "text": "<p><em>Algorithm</em> would be a bit exaggerated for a simple observation. In general, $z=u+iv$ in a unit mod $7$ in $\\mathbf Z[i]$ if and only if its norm $N(z)=u^2+v^2$ is a unit in $\\mathbf Z/7\\mathbf Z$, and its inverse is $N(z)^{-1}\\bar z$.</p>\n\n<p>Hence, all you have to do is proceed with the <em>extended Euclidean algorithm</em> in $\\mathbf Z$ for $N(z)$.</p>\n\n<p>In the present case, $N(1+2i)=5$, its inverse mod. 
Question: How to find the inverse of an algebraic integer mod q Answer 1: <p><em>Algorithm</em> would be a bit exaggerated for a simple observation. In general, $z=u+iv$ is a unit mod $7$ in $\mathbf Z[i]$ if and only if its norm $N(z)=u^2+v^2$ is a unit in $\mathbf Z/7\mathbf Z$, and its inverse is $N(z)^{-1}\bar z$.</p> <p>Hence, all you have to do is proceed with the <em>extended Euclidean algorithm</em> in $\mathbf Z$ for $N(z)$.</p> <p>In the present case, $N(1+2i)=5$, its inverse mod $7$ is $3$, hence $$(1+2i)^{-1} \bmod 7=3(1-2i)=3+i.$$</p> Answer 2: <p>If I understood correctly, you need</p> <p>$$(1+2i)(a+bi)=1+7m\;,\;\;a,b,m\in\Bbb Z\implies a-2b+(2a+b)i=1+7m\iff$$</p> <p>$$\begin{cases}a-2b=1+7m\\{}\\2a+b=0\end{cases}\;\;\;\implies5a=1+7m\implies a=3\pmod 7\;,\;\;b=-2a=1\pmod7$$</p> <p>Thus, for example, with $\;3+i\;$:</p> <p>$$(3+i)(1+2i)=1+7i=1\pmod7$$</p> <p>But you could pick as well $\;10+8i\;$ say, since</p> <p>$$(10+8i)(1+2i)=-6+28i=1\pmod7$$</p> Answer 3: <p>No. Consider $K=\Bbb Q[\sqrt{-5}]$. Let $a=1+\sqrt{-5}$, $q=2$.</p> <p>Suppose that there exist $b,c\in \Bbb Z[\sqrt{-5}]$ such that $ab+cq=1$. But now note that $$3q=(1-\sqrt{-5})a$$ that is, $$3bq=(1-\sqrt{-5})ab=(1-\sqrt{-5})(1-cq)$$ therefore, $$q(3b+(1-\sqrt{-5})c)=1-\sqrt{-5}$$ which is clearly a contradiction.</p> <p>In general, we can say that if the equation $ax\equiv 1\pmod m$ is solvable on some integral domain $R$ whenever $a$ and $m$ are coprime, then $R$ is a unique factorization domain. But rings like $\Bbb Z[\sqrt d]$ where $d$ is a non-square integer are not always UFDs.</p> <p>For such rings that are Euclidean domains, the Euclidean algorithm works and you can use it to find inverses, just like in $\Bbb Z$.</p> Answer 4: <p>In the specific case I would proceed as follows.</p> <p>Since $7\equiv 3\bmod 4$ one knows that $7{\cal O}_K$ is a prime ideal and $$ {\cal O}_K/7{\cal O}_K\simeq{\Bbb F}_7[i] $$ is the field with $7^2$ elements.</p> <p>Then one can solve $(1+2i)(x+iy)=1$ in ${\Bbb F}_7[i]$ which gives rise to a linear system in $\Bbb F_7^2$ pretty much as in DonAntonio's answer (which came out as I'm writing this). It gives the solution $$ x+iy\equiv 3-6i\equiv 3+i\bmod 7{\cal O}_K. $$</p> <p>I guess that one can tackle the general problem along the same lines as long as the ideal $q{\cal O}_K$ is maximal.</p> <p>If not, one has to preliminarily find the decomposition $q{\cal O}_K=\prod_i\wp_i^{e_i}$ in ${\cal O}_K$ where the $\wp_i$ are maximal. Then your starting element $a$ will be invertible modulo $q$ if and only if it is invertible modulo each and every $\wp_i^{e_i}$ by the Chinese Remainder Theorem.</p> <p>Of course in the rare event where ${\cal O}_K$ is Euclidean, one can use the division algorithm to make the Bézout identity explicit.</p>
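The norm trick in Answer 1 translates directly into code. A minimal Python sketch for $\mathbb Z[i]$, assuming $\gcd(N(a), q) = 1$; the helper names are my own, and `pow(n, -1, q)` needs Python 3.8 or later:

```python
def inv_gaussian_mod(a, q):
    # Inverse of a = u + v*i modulo q in Z[i], via a^{-1} = N(a)^{-1} * conj(a).
    u, v = a
    n_inv = pow((u * u + v * v) % q, -1, q)   # inverse of the norm in Z/qZ
    return ((n_inv * u) % q, (-n_inv * v) % q)

def mul_gaussian(a, b, q):
    # Multiply two Gaussian integers and reduce both coordinates mod q.
    (u, v), (s, t) = a, b
    return ((u * s - v * t) % q, (u * t + v * s) % q)

b = inv_gaussian_mod((1, 2), 7)
print(b)                            # (3, 1), i.e. 3 + i
print(mul_gaussian((1, 2), b, 7))   # (1, 0), i.e. 1 mod 7
```

This reproduces $(1+2i)^{-1}\equiv 3+i \pmod 7$ from the answers above.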
Intermediate values of $p$-norms
2,080,402
4
https://math.stackexchange.com/questions/2080402/intermediate-values-of-p-norms
<p>Let $x,y \in [0,1]^n$ be $n$ dimensional vectors with elements in the continuous set $[0,1]$. </p> <p>Suppose we have the following conditions:</p> <p>1) $\Vert x\Vert_1 = \Vert y \Vert_1=1$ ,</p> <p>2) $\Vert x\Vert_2 \geq \Vert y \Vert_2$ ,</p> <p>3) $\Vert x\Vert_\infty \geq \Vert y \Vert_\infty$,</p> <p>where $\Vert x\Vert_\alpha := \left( \sum x_i^\alpha\right)^{1/\alpha}.$</p> <p><strong>Can we deduce the following: $\Vert x\Vert_\alpha \geq \Vert y \Vert_\alpha$ for all $\alpha&gt;1$?</strong></p>
[ { "answer_id": 2080934, "score": 3, "text": "<p>To see that this is a bit too optimistic, imagine two vectors $x$ and $y$ that have the same 1-norm, same 2-norm, and same $\\infty$-norm. If your conjecture were true, than they would have the same $\\alpha$-norm for every $\\alpha$, which is impossible without them being the same. For a concrete example, take </p>\n\n<p>$$x=(10,2,2,0),\\qquad y=(10,1,(3+\\sqrt{5})/2, (3-\\sqrt{5})/2)$$ \nwhich satisfy</p>\n\n<ul>\n<li>$\\|x\\|_1=\\|y\\|_1 = 14$, </li>\n<li>$\\|x\\|_2=\\|y\\|_2 = \\sqrt{108}$, </li>\n<li>$\\|x\\|_\\infty=\\|y\\|_\\infty = 10$</li>\n</ul>\n\n<p>These will have different $3$-norms, for example ($1016^{1/3}$ and $1019^{1/3}$, to be specific). They can be normalized to have 1-norm equal to $1$, just divide by 14 if desired.</p>\n" }, { "answer_id": 2080892, "score": 0, "text": "<p><strong>No.</strong></p>\n\n<p>A counter example is:</p>\n\n<p>$$x = \\left[0.354019 \\quad 0.162424 \\quad 0.302226 \\quad 0.145733 \\quad 0.035598\\right]$$\n$$y = \\left[0.137069\\quad 0.065747\\quad 0.327494 \\quad 0.346246\\quad 0.123444\\right]$$\n$$p = 8 .$$</p>\n\n<p>I wrote the following Matlab script to prepare vectors in the required form. I then loop over different values of $p$ at the end to check if I can find vectors that do not conform to $\\Vert x \\Vert_p \\geq \\Vert y \\Vert_p$. </p>\n\n<p><code>clear all\nn = 5; p = 2; count = 0;trial = 1000;\nfor i = 1:trial\n x(i,:) = exprnd(1,1,n); x(i,:) = x(i,:)/norm(x(i,:),1);\n y(i,:) = exprnd(1,1,n); y(i,:) = y(i,:)/norm(y(i,:),1);\n if max(x(i,:)) &lt; max(y(i,:))\n temp = y(i,:);\n y(i,:) = x(i,:);\n x(i,:) = temp;\n end \n if norm(x(i,:),2) > norm(y(i,:),2)\n count = count +1;\n x1(count,:) = x(i,:);\n y1(count,:) = y(i,:);\n end\nend\nfor p = 1.1:0.1:10\n for i = 1:count\n if norm(x1(i,:),p) &lt; norm(y1(i,:),p)\n norm(x1(i,:),p) &lt; norm(y1(i,:),p)\n p\n end<br>\n end\nend</code></p>\n" }, { "answer_id": 2080934, "score": 3, "text": "<p>To see that this is a bit too optimistic, imagine two vectors $x$ and $y$ that have the same 1-norm, same 2-norm, and same $\\infty$-norm. If your conjecture were true, than they would have the same $\\alpha$-norm for every $\\alpha$, which is impossible without them being the same. For a concrete example, take </p>\n\n<p>$$x=(10,2,2,0),\\qquad y=(10,1,(3+\\sqrt{5})/2, (3-\\sqrt{5})/2)$$ \nwhich satisfy</p>\n\n<ul>\n<li>$\\|x\\|_1=\\|y\\|_1 = 14$, </li>\n<li>$\\|x\\|_2=\\|y\\|_2 = \\sqrt{108}$, </li>\n<li>$\\|x\\|_\\infty=\\|y\\|_\\infty = 10$</li>\n</ul>\n\n<p>These will have different $3$-norms, for example ($1016^{1/3}$ and $1019^{1/3}$, to be specific). They can be normalized to have 1-norm equal to $1$, just divide by 14 if desired.</p>\n" }, { "answer_id": 2080892, "score": 0, "text": "<p><strong>No.</strong></p>\n\n<p>A counter example is:</p>\n\n<p>$$x = \\left[0.354019 \\quad 0.162424 \\quad 0.302226 \\quad 0.145733 \\quad 0.035598\\right]$$\n$$y = \\left[0.137069\\quad 0.065747\\quad 0.327494 \\quad 0.346246\\quad 0.123444\\right]$$\n$$p = 8 .$$</p>\n\n<p>I wrote the following Matlab script to prepare vectors in the required form. I then loop over different values of $p$ at the end to check if I can find vectors that do not conform to $\\Vert x \\Vert_p \\geq \\Vert y \\Vert_p$. 
Question: Intermediate values of $p$-norms Answer 1: <p>To see that this is a bit too optimistic, imagine two vectors $x$ and $y$ that have the same 1-norm, same 2-norm, and same $\infty$-norm. If your conjecture were true, then they would have the same $\alpha$-norm for every $\alpha$, which is impossible unless they have the same entries (up to reordering). For a concrete example, take </p> <p>$$x=(10,2,2,0),\qquad y=(10,1,(3+\sqrt{5})/2, (3-\sqrt{5})/2)$$ which satisfy</p> <ul> <li>$\|x\|_1=\|y\|_1 = 14$, </li> <li>$\|x\|_2=\|y\|_2 = \sqrt{108}$, </li> <li>$\|x\|_\infty=\|y\|_\infty = 10$</li> </ul> <p>These will have different $3$-norms, for example ($1016^{1/3}$ and $1019^{1/3}$, to be specific). They can be normalized to have 1-norm equal to $1$, just divide by 14 if desired.</p> Answer 2: <p><strong>No.</strong></p> <p>A counterexample is:</p> <p>$$x = \left[0.354019 \quad 0.162424 \quad 0.302226 \quad 0.145733 \quad 0.035598\right]$$ $$y = \left[0.137069\quad 0.065747\quad 0.327494 \quad 0.346246\quad 0.123444\right]$$ $$p = 8 .$$</p> <p>I wrote the following Matlab script to prepare vectors in the required form. I then loop over different values of $p$ at the end to check if I can find vectors that do not conform to $\Vert x \Vert_p \geq \Vert y \Vert_p$. </p> <p><code>clear all n = 5; p = 2; count = 0; trial = 1000; for i = 1:trial x(i,:) = exprnd(1,1,n); x(i,:) = x(i,:)/norm(x(i,:),1); y(i,:) = exprnd(1,1,n); y(i,:) = y(i,:)/norm(y(i,:),1); if max(x(i,:)) &lt; max(y(i,:)) temp = y(i,:); y(i,:) = x(i,:); x(i,:) = temp; end if norm(x(i,:),2) &gt; norm(y(i,:),2) count = count + 1; x1(count,:) = x(i,:); y1(count,:) = y(i,:); end end for p = 1.1:0.1:10 for i = 1:count if norm(x1(i,:),p) &lt; norm(y1(i,:),p) norm(x1(i,:),p) &lt; norm(y1(i,:),p) p end end end</code></p>
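The closed-form counterexample in Answer 1 is quick to verify. A minimal Python sketch, assuming NumPy (the variable names are mine):

```python
import numpy as np

s5 = np.sqrt(5.0)
x = np.array([10.0, 2.0, 2.0, 0.0])
y = np.array([10.0, 1.0, (3 + s5) / 2, (3 - s5) / 2])

for p in (1, 2, np.inf):
    # The 1-, 2- and infinity-norms agree pairwise.
    print(p, np.linalg.norm(x, p), np.linalg.norm(y, p))

# The 3-norms differ: 1016**(1/3) versus 1019**(1/3).
print(np.sum(x**3), np.sum(y**3))   # 1016.0 and ~1019.0
```

Dividing both vectors by 14 puts them in the normalized form of the question without changing the comparison.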
Solving $W$ function
2,080,477
4
https://math.stackexchange.com/questions/2080477/solving-w-function
<p>What is the general form of the Lambert $W$-function for calculating any $x$ in $\mathbb R$?</p> <p>I had problems solving $x^2-e^x=0$ for $x$. I reached $x=-2W(\frac12)$.</p> <p>What does $W(\frac12)$ equal?</p>
[ { "answer_id": 2080488, "score": 7, "text": "<p>There is no \"general form of the Lambert W-function\". All you have is the definition, tables, power series, other numerical methods. It's like the trigonometric functions - no algebraic \"formula\". </p>\n\n<p>From wikipedia:</p>\n\n<blockquote>\n <p>The Lambert W relation cannot be expressed in terms of elementary\n functions.<a href=\"https://en.wikipedia.org/wiki/Lambert_W_function\" rel=\"nofollow noreferrer\">1</a></p>\n</blockquote>\n\n<p>Nowadays the trig functions are so common that you don't tend to think of them as special in that way. \nThe Lambert W-function seems to appear more and more often on this site. Perhaps some day people will view it as just another special function.</p>\n" }, { "answer_id": 2080498, "score": 3, "text": "<p>Short answer:</p>\n\n<p>As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method.</p>\n\n<p>Longer answer:</p>\n\n<p>One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision.</p>\n\n<blockquote>\n <p>Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G Hare, D.J. Jeffrey and\n D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359)</p>\n</blockquote>\n\n<p>Quoted verbatim, except for changed formatting:</p>\n\n<blockquote>\n <p>We return to the specific problem at hand that of computing a value\n of $W_k(z)$ for arbitrary integer $k$ and complex $z$ .</p>\n \n <p>Taking full advantage of the features of iterative rootfinders\n outlined above we compared the efficiency of three methods, namely,</p>\n \n <p><code>(1)</code> Newton's method </p>\n \n <p><code>(2)</code> Halley's method </p>\n \n <p><code>(3)</code> the fourth order method described in <code>[30]</code> (as published this last method evaluates only the principal branch of $W$ at\n positive real arguments but it easily extends to all branches and to\n all complex arguments)</p>\n \n <p>... The results showed quite consistently that method <code>(2)</code> is\n the optimal method ...</p>\n \n <p>For the $W$ function, Halley's method takes the form:</p>\n \n <p>$$w_{j+1} = w_j - \\frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\\frac{(w_j+2)w_je^{w_j}-z}{2w_j + 2}}$$</p>\n</blockquote>\n\n<p>You can find more information about Newton's Method and Halley's method on Wikipedia. Let me know if you're confused about the notation, what $k$ means, or what a \"branch\" is.</p>\n\n<p>If there's more than one solution to $x = We^W$, then you have to pick one in order for $W$ to be a function. For real numbers, $x=We^W$ has a unique solution $W$ for $x&gt;0$, has two solutions for $-\\frac 1 e \\le x \\le 0$ and no solution for $x &lt; -\\frac 1 e$. So we \"cut off the branch\" that gives us the second solution for $-\\frac 1 e \\le x \\le 0$, and the remaining function is called the \"principal branch\".</p>\n" }, { "answer_id": 2091768, "score": 0, "text": "<p>A longish comment:</p>\n\n<p>Even without actually knowing what the Lambert function is, you can get a reasonably good answer by knowing that $x$ is reasonably small (you know this since $x^2-e^x$ is monotonic, and checking the values at say $x=-1,0$).</p>\n\n<p>If you expand $e^x$, and keep only up to the quadratic term, you have:\n$$x^2=1+x+x^2/2$$\nWhich gives:\n$$x= 1-\\sqrt{1+2}\\approx 0.73$$\nWhich is pretty close. 
You can then use Newton's method to refine if needed.</p>\n" }, { "answer_id": 2080488, "score": 7, "text": "<p>There is no \"general form of the Lambert W-function\". All you have is the definition, tables, power series, other numerical methods. It's like the trigonometric functions - no algebraic \"formula\". </p>\n\n<p>From wikipedia:</p>\n\n<blockquote>\n <p>The Lambert W relation cannot be expressed in terms of elementary\n functions.<a href=\"https://en.wikipedia.org/wiki/Lambert_W_function\" rel=\"nofollow noreferrer\">1</a></p>\n</blockquote>\n\n<p>Nowadays the trig functions are so common that you don't tend to think of them as special in that way. \nThe Lambert W-function seems to appear more and more often on this site. Perhaps some day people will view it as just another special function.</p>\n" }, { "answer_id": 2080498, "score": 3, "text": "<p>Short answer:</p>\n\n<p>As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method.</p>\n\n<p>Longer answer:</p>\n\n<p>One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision.</p>\n\n<blockquote>\n <p>Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G Hare, D.J. Jeffrey and\n D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359)</p>\n</blockquote>\n\n<p>Quoted verbatim, except for changed formatting:</p>\n\n<blockquote>\n <p>We return to the specific problem at hand that of computing a value\n of $W_k(z)$ for arbitrary integer $k$ and complex $z$ .</p>\n \n <p>Taking full advantage of the features of iterative rootfinders\n outlined above we compared the efficiency of three methods, namely,</p>\n \n <p><code>(1)</code> Newton's method </p>\n \n <p><code>(2)</code> Halley's method </p>\n \n <p><code>(3)</code> the fourth order method described in <code>[30]</code> (as published this last method evaluates only the principal branch of $W$ at\n positive real arguments but it easily extends to all branches and to\n all complex arguments)</p>\n \n <p>... The results showed quite consistently that method <code>(2)</code> is\n the optimal method ...</p>\n \n <p>For the $W$ function, Halley's method takes the form:</p>\n \n <p>$$w_{j+1} = w_j - \\frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\\frac{(w_j+2)w_je^{w_j}-z}{2w_j + 2}}$$</p>\n</blockquote>\n\n<p>You can find more information about Newton's Method and Halley's method on Wikipedia. Let me know if you're confused about the notation, what $k$ means, or what a \"branch\" is.</p>\n\n<p>If there's more than one solution to $x = We^W$, then you have to pick one in order for $W$ to be a function. For real numbers, $x=We^W$ has a unique solution $W$ for $x&gt;0$, has two solutions for $-\\frac 1 e \\le x \\le 0$ and no solution for $x &lt; -\\frac 1 e$. 
So we \"cut off the branch\" that gives us the second solution for $-\\frac 1 e \\le x \\le 0$, and the remaining function is called the \"principal branch\".</p>\n" }, { "answer_id": 2091768, "score": 0, "text": "<p>A longish comment:</p>\n\n<p>Even without actually knowing what the Lambert function is, you can get a reasonably good answer by knowing that $x$ is reasonably small (you know this since $x^2-e^x$ is monotonic, and checking the values at say $x=-1,0$).</p>\n\n<p>If you expand $e^x$, and keep only up to the quadratic term, you have:\n$$x^2=1+x+x^2/2$$\nWhich gives:\n$$x= 1-\\sqrt{1+2}\\approx 0.73$$\nWhich is pretty close. You can then use Newton's method to refine if needed.</p>\n" }, { "answer_id": 2080488, "score": 7, "text": "<p>There is no \"general form of the Lambert W-function\". All you have is the definition, tables, power series, other numerical methods. It's like the trigonometric functions - no algebraic \"formula\". </p>\n\n<p>From wikipedia:</p>\n\n<blockquote>\n <p>The Lambert W relation cannot be expressed in terms of elementary\n functions.<a href=\"https://en.wikipedia.org/wiki/Lambert_W_function\" rel=\"nofollow noreferrer\">1</a></p>\n</blockquote>\n\n<p>Nowadays the trig functions are so common that you don't tend to think of them as special in that way. \nThe Lambert W-function seems to appear more and more often on this site. Perhaps some day people will view it as just another special function.</p>\n" }, { "answer_id": 2080498, "score": 3, "text": "<p>Short answer:</p>\n\n<p>As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method.</p>\n\n<p>Longer answer:</p>\n\n<p>One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision.</p>\n\n<blockquote>\n <p>Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G Hare, D.J. Jeffrey and\n D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359)</p>\n</blockquote>\n\n<p>Quoted verbatim, except for changed formatting:</p>\n\n<blockquote>\n <p>We return to the specific problem at hand that of computing a value\n of $W_k(z)$ for arbitrary integer $k$ and complex $z$ .</p>\n \n <p>Taking full advantage of the features of iterative rootfinders\n outlined above we compared the efficiency of three methods, namely,</p>\n \n <p><code>(1)</code> Newton's method </p>\n \n <p><code>(2)</code> Halley's method </p>\n \n <p><code>(3)</code> the fourth order method described in <code>[30]</code> (as published this last method evaluates only the principal branch of $W$ at\n positive real arguments but it easily extends to all branches and to\n all complex arguments)</p>\n \n <p>... The results showed quite consistently that method <code>(2)</code> is\n the optimal method ...</p>\n \n <p>For the $W$ function, Halley's method takes the form:</p>\n \n <p>$$w_{j+1} = w_j - \\frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\\frac{(w_j+2)w_je^{w_j}-z}{2w_j + 2}}$$</p>\n</blockquote>\n\n<p>You can find more information about Newton's Method and Halley's method on Wikipedia. Let me know if you're confused about the notation, what $k$ means, or what a \"branch\" is.</p>\n\n<p>If there's more than one solution to $x = We^W$, then you have to pick one in order for $W$ to be a function. 
For real numbers, $x=We^W$ has a unique solution $W$ for $x&gt;0$, has two solutions for $-\\frac 1 e \\le x \\le 0$ and no solution for $x &lt; -\\frac 1 e$. So we \"cut off the branch\" that gives us the second solution for $-\\frac 1 e \\le x \\le 0$, and the remaining function is called the \"principal branch\".</p>\n" }, { "answer_id": 2091768, "score": 0, "text": "<p>A longish comment:</p>\n\n<p>Even without actually knowing what the Lambert function is, you can get a reasonably good answer by knowing that $x$ is reasonably small (you know this since $x^2-e^x$ is monotonic, and checking the values at say $x=-1,0$).</p>\n\n<p>If you expand $e^x$, and keep only up to the quadratic term, you have:\n$$x^2=1+x+x^2/2$$\nWhich gives:\n$$x= 1-\\sqrt{1+2}\\approx 0.73$$\nWhich is pretty close. You can then use Newton's method to refine if needed.</p>\n" }, { "answer_id": 2080488, "score": 7, "text": "<p>There is no \"general form of the Lambert W-function\". All you have is the definition, tables, power series, other numerical methods. It's like the trigonometric functions - no algebraic \"formula\". </p>\n\n<p>From wikipedia:</p>\n\n<blockquote>\n <p>The Lambert W relation cannot be expressed in terms of elementary\n functions.<a href=\"https://en.wikipedia.org/wiki/Lambert_W_function\" rel=\"nofollow noreferrer\">1</a></p>\n</blockquote>\n\n<p>Nowadays the trig functions are so common that you don't tend to think of them as special in that way. \nThe Lambert W-function seems to appear more and more often on this site. Perhaps some day people will view it as just another special function.</p>\n" }, { "answer_id": 2080498, "score": 3, "text": "<p>Short answer:</p>\n\n<p>As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method.</p>\n\n<p>Longer answer:</p>\n\n<p>One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision.</p>\n\n<blockquote>\n <p>Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G Hare, D.J. Jeffrey and\n D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359)</p>\n</blockquote>\n\n<p>Quoted verbatim, except for changed formatting:</p>\n\n<blockquote>\n <p>We return to the specific problem at hand that of computing a value\n of $W_k(z)$ for arbitrary integer $k$ and complex $z$ .</p>\n \n <p>Taking full advantage of the features of iterative rootfinders\n outlined above we compared the efficiency of three methods, namely,</p>\n \n <p><code>(1)</code> Newton's method </p>\n \n <p><code>(2)</code> Halley's method </p>\n \n <p><code>(3)</code> the fourth order method described in <code>[30]</code> (as published this last method evaluates only the principal branch of $W$ at\n positive real arguments but it easily extends to all branches and to\n all complex arguments)</p>\n \n <p>... The results showed quite consistently that method <code>(2)</code> is\n the optimal method ...</p>\n \n <p>For the $W$ function, Halley's method takes the form:</p>\n \n <p>$$w_{j+1} = w_j - \\frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\\frac{(w_j+2)w_je^{w_j}-z}{2w_j + 2}}$$</p>\n</blockquote>\n\n<p>You can find more information about Newton's Method and Halley's method on Wikipedia. 
Let me know if you're confused about the notation, what $k$ means, or what a \"branch\" is.</p>\n\n<p>If there's more than one solution to $x = We^W$, then you have to pick one in order for $W$ to be a function. For real numbers, $x=We^W$ has a unique solution $W$ for $x&gt;0$, has two solutions for $-\\frac 1 e \\le x \\le 0$ and no solution for $x &lt; -\\frac 1 e$. So we \"cut off the branch\" that gives us the second solution for $-\\frac 1 e \\le x \\le 0$, and the remaining function is called the \"principal branch\".</p>\n" }, { "answer_id": 2091768, "score": 0, "text": "<p>A longish comment:</p>\n\n<p>Even without actually knowing what the Lambert function is, you can get a reasonably good answer by knowing that $x$ is reasonably small (you know this since $x^2-e^x$ is monotonic, and checking the values at say $x=-1,0$).</p>\n\n<p>If you expand $e^x$, and keep only up to the quadratic term, you have:\n$$x^2=1+x+x^2/2$$\nWhich gives:\n$$x= 1-\\sqrt{1+2}\\approx 0.73$$\nWhich is pretty close. You can then use Newton's method to refine if needed.</p>\n" } ]
Question: Solving $W$ function Answer 1: <p>There is no "general form of the Lambert W-function". All you have is the definition, tables, power series, and other numerical methods. It's like the trigonometric functions - no algebraic "formula".</p> <p>From wikipedia:</p> <blockquote> <p>The Lambert W relation cannot be expressed in terms of elementary functions.<a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">1</a></p> </blockquote> <p>Nowadays the trig functions are so common that you don't tend to think of them as special in that way. The Lambert W-function seems to appear more and more often on this site. Perhaps some day people will view it as just another special function.</p> Answer 2: <p>Short answer:</p> <p>As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method.</p> <p>Longer answer:</p> <p>One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision.</p> <blockquote> <p>Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey and D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359)</p> </blockquote> <p>Quoted verbatim, except for changed formatting:</p> <blockquote> <p>We return to the specific problem at hand, that of computing a value of $W_k(z)$ for arbitrary integer $k$ and complex $z$.</p> <p>Taking full advantage of the features of iterative rootfinders outlined above, we compared the efficiency of three methods, namely,</p> <p><code>(1)</code> Newton's method</p> <p><code>(2)</code> Halley's method</p> <p><code>(3)</code> the fourth order method described in <code>[30]</code> (as published this last method evaluates only the principal branch of $W$ at positive real arguments, but it easily extends to all branches and to all complex arguments)</p> <p>... The results showed quite consistently that method <code>(2)</code> is the optimal method ...</p> <p>For the $W$ function, Halley's method takes the form:</p> <p>$$w_{j+1} = w_j - \frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\frac{(w_j+2)\left(w_je^{w_j}-z\right)}{2w_j + 2}}$$</p> </blockquote> <p>You can find more information about Newton's Method and Halley's method on Wikipedia. Let me know if you're confused about the notation, what $k$ means, or what a "branch" is.</p> <p>If there's more than one solution $W$ to $x = We^W$, then you have to pick one in order for $W$ to be a function. For real numbers, $x=We^W$ has a unique solution $W$ for $x\ge 0$, two solutions for $-\frac 1 e &lt; x &lt; 0$ (and exactly one at $x=-\frac 1 e$), and no solution for $x &lt; -\frac 1 e$. So we "cut off the branch" that gives us the second solution on $-\frac 1 e &lt; x &lt; 0$, and the remaining function is called the "principal branch".</p> Answer 3: <p>A longish comment:</p> <p>Even without actually knowing what the Lambert function is, you can get a reasonably good answer by knowing that the root $x$ is reasonably small (you know this since $x^2-e^x$ is monotonic, and by checking its values at, say, $x=-1,0$).</p> <p>If you expand $e^x$ and keep only up to the quadratic term, you have: $$x^2=1+x+x^2/2,$$ which gives: $$x= 1-\sqrt{1+2}\approx -0.73.$$ This is pretty close. You can then use Newton's method to refine if needed.</p>
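A minimal Python sketch of the Halley iteration quoted in Answer 2 above. The helper name `lambert_w`, the starting guess $w_0=0$, the tolerance, and the iteration cap are illustrative choices of this write-up, not taken from the quoted paper:

```python
import math

def lambert_w(z, w0=0.0, tol=1e-12, max_iter=50):
    """Principal branch W_0(z) for z >= 0, via the Halley step quoted above."""
    w = w0
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - z  # residual of w * e^w = z
        # Halley update: w - f / (f' - (w+2) f / (2w+2)), with f' = e^w (w+1)
        w_new = w - f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

print(lambert_w(1.0))        # omega constant, approximately 0.5671432904
print(-2 * lambert_w(0.5))   # root of x^2 = e^x from Answer 3, about -0.7035
```

The last line uses the rearrangement $x^2=e^x$ with $x&lt;0$: taking square roots gives $-x=e^{x/2}$, so with $u=-x/2$ we get $ue^u=1/2$, i.e. $x=-2W(1/2)\approx -0.7035$, consistent with the $1-\sqrt{3}\approx -0.73$ estimate in Answer 3.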
e as limit with a rational sequence instead of n
2,080,673
4
https://math.stackexchange.com/questions/2080673/e-as-limit-with-a-rational-sequence-instead-of-n
<p>Hello and happy New Year to the Stack Exchange community. I had trouble solving this seemingly trivial problem and it would be great if you could help me out.</p> <p>$$\lim_{n \to \infty}(1+\frac{1}{n})^{n}=e$$</p> <p>This definition of e is given, and I want to show that the following identity is true as well.</p> <p>$$\lim_{n \to \infty} (1+\frac{1}{q_{n}})^{q_{n}} = e$$</p> <p>with $q_{n}$ a positive rational sequence which diverges to $+\infty$ (that is, $\lim_{n \to \infty} q_{n}=+\infty$).</p> <hr> <p>I figured that since</p> <p>$$\lim_{n \to \infty} n = +\infty = \lim_{n \to \infty} q_{n}$$</p> <p>I could move $\lim_{n \to \infty}$ inside the given expression, so that we have</p> <p>$$\lim_{n \to \infty}(1+\frac{1}{q_{n}})^{q_{n}}=(1+\frac{1}{\lim_{n \to \infty} q_{n}})^{\lim_{n \to \infty}q_{n}} = (1+\frac{1}{\lim_{n \to \infty} n})^{\lim_{n \to \infty} n} = \lim_{n \to \infty}(1+\frac{1}{n})^{n}=e.$$</p> <p>However, apparently, I need to show that the underlying function is continuous in order to do this, and my course hasn't introduced continuity yet. Of course, I could first prove the theorem needed to move the limit inside the expression and then show the function is continuous, but I feel like there is a much easier and more elegant solution to this.</p> <p>I'm sorry if this is a duplicate of a previously answered question (at least I couldn't find one) and any insight would be appreciated. Thank you very much.</p> <hr> <p>Update — I think I managed to solve it by brute force, but it took me hours and my proof is 2+ pages long. I'm sure that my proof was not the intended one and there must be a simpler one. Anyway, thank you for the help and when I have time, I will update this question with my solution as reference.</p>
[ { "answer_id": 2083219, "score": 2, "text": "<p>It is not much difficult to establish what you seek. The function $$f(x) = \\left(1 + \\frac{1}{x}\\right)^{x}$$ for $x \\in \\mathbb{Q}^{+}$ is an increasing function. This is easy to prove via Bernoulli's inequality. Let $x, y$ be positive rationals with $x &gt; y$ then we have $$\\left(1 + \\dfrac{1}{x}\\right)^{x/y} \\geq 1 + \\frac{x}{y}\\cdot\\frac{1}{x} = 1 + \\frac{1}{y}$$ so that $f(x) \\geq f(y)$.</p>\n\n<p>Let $q_{n}$ be a sequence of rationals tending to $\\infty$ and let $s_{n} = \\lfloor q_{n}\\rfloor$ and $t_{n} = s_{n} + 1$. Clearly we can see that from a certain value of $n$ the sequences $s_{n}, t_{n}$ are made of positive integers only and both tend to $\\infty$. Now we know that $f(n) \\to e$ as $n \\to \\infty$ and hence $f(s_{n}), f(t_{n})$ both tend to $e$ as $n \\to \\infty$. Since $s_{n}\\leq q_{n}\\leq t_{n}$ it follows that $f(s_{n}) \\leq f(q_{n})\\leq f(t_{n})$ and by Squeeze theorem $f(q_{n}) \\to e$ as $n \\to \\infty$.</p>\n\n<hr>\n\n<p><strong>Note</strong>: The above proof requires the <a href=\"https://math.stackexchange.com/a/327189/72031\">version of Bernoulli's inequality with rational exponents</a>.</p>\n" }, { "answer_id": 2080802, "score": -1, "text": "<p>(long comment)</p>\n\n<p>I suggest a way(not the only way) here. First you can treat $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ as a limit of a <strong>function</strong> $f:\\Bbb R\\to\\Bbb R$, rather than a sequence, to infinity. So you may write $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ instead.</p>\n\n<p>Then, if you have admit that $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ to be true, then you can directly use the famous theorem, Sequential Characterization of Limits(in particular, the version that related to limits as $x$ goes to $\\infty$, if you don't know, just google it!). Then you can instantly get that $\\lim_{n \\to \\infty} (1+\\frac{1}{q_{n}})^{q_{n}} = e$ for all rational sequence $q_n$ that diverges to $\\infty$.</p>\n\n<p>PS: I'm afraid that if you are only allowable to admit $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ but not $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$, then your claim can not be proved, though I'm not sure about this.</p>\n" }, { "answer_id": 2083219, "score": 2, "text": "<p>It is not much difficult to establish what you seek. The function $$f(x) = \\left(1 + \\frac{1}{x}\\right)^{x}$$ for $x \\in \\mathbb{Q}^{+}$ is an increasing function. This is easy to prove via Bernoulli's inequality. Let $x, y$ be positive rationals with $x &gt; y$ then we have $$\\left(1 + \\dfrac{1}{x}\\right)^{x/y} \\geq 1 + \\frac{x}{y}\\cdot\\frac{1}{x} = 1 + \\frac{1}{y}$$ so that $f(x) \\geq f(y)$.</p>\n\n<p>Let $q_{n}$ be a sequence of rationals tending to $\\infty$ and let $s_{n} = \\lfloor q_{n}\\rfloor$ and $t_{n} = s_{n} + 1$. Clearly we can see that from a certain value of $n$ the sequences $s_{n}, t_{n}$ are made of positive integers only and both tend to $\\infty$. Now we know that $f(n) \\to e$ as $n \\to \\infty$ and hence $f(s_{n}), f(t_{n})$ both tend to $e$ as $n \\to \\infty$. 
Since $s_{n}\\leq q_{n}\\leq t_{n}$ it follows that $f(s_{n}) \\leq f(q_{n})\\leq f(t_{n})$ and by Squeeze theorem $f(q_{n}) \\to e$ as $n \\to \\infty$.</p>\n\n<hr>\n\n<p><strong>Note</strong>: The above proof requires the <a href=\"https://math.stackexchange.com/a/327189/72031\">version of Bernoulli's inequality with rational exponents</a>.</p>\n" }, { "answer_id": 2080802, "score": -1, "text": "<p>(long comment)</p>\n\n<p>I suggest a way(not the only way) here. First you can treat $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ as a limit of a <strong>function</strong> $f:\\Bbb R\\to\\Bbb R$, rather than a sequence, to infinity. So you may write $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ instead.</p>\n\n<p>Then, if you have admit that $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ to be true, then you can directly use the famous theorem, Sequential Characterization of Limits(in particular, the version that related to limits as $x$ goes to $\\infty$, if you don't know, just google it!). Then you can instantly get that $\\lim_{n \\to \\infty} (1+\\frac{1}{q_{n}})^{q_{n}} = e$ for all rational sequence $q_n$ that diverges to $\\infty$.</p>\n\n<p>PS: I'm afraid that if you are only allowable to admit $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ but not $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$, then your claim can not be proved, though I'm not sure about this.</p>\n" }, { "answer_id": 2083219, "score": 2, "text": "<p>It is not much difficult to establish what you seek. The function $$f(x) = \\left(1 + \\frac{1}{x}\\right)^{x}$$ for $x \\in \\mathbb{Q}^{+}$ is an increasing function. This is easy to prove via Bernoulli's inequality. Let $x, y$ be positive rationals with $x &gt; y$ then we have $$\\left(1 + \\dfrac{1}{x}\\right)^{x/y} \\geq 1 + \\frac{x}{y}\\cdot\\frac{1}{x} = 1 + \\frac{1}{y}$$ so that $f(x) \\geq f(y)$.</p>\n\n<p>Let $q_{n}$ be a sequence of rationals tending to $\\infty$ and let $s_{n} = \\lfloor q_{n}\\rfloor$ and $t_{n} = s_{n} + 1$. Clearly we can see that from a certain value of $n$ the sequences $s_{n}, t_{n}$ are made of positive integers only and both tend to $\\infty$. Now we know that $f(n) \\to e$ as $n \\to \\infty$ and hence $f(s_{n}), f(t_{n})$ both tend to $e$ as $n \\to \\infty$. Since $s_{n}\\leq q_{n}\\leq t_{n}$ it follows that $f(s_{n}) \\leq f(q_{n})\\leq f(t_{n})$ and by Squeeze theorem $f(q_{n}) \\to e$ as $n \\to \\infty$.</p>\n\n<hr>\n\n<p><strong>Note</strong>: The above proof requires the <a href=\"https://math.stackexchange.com/a/327189/72031\">version of Bernoulli's inequality with rational exponents</a>.</p>\n" }, { "answer_id": 2080802, "score": -1, "text": "<p>(long comment)</p>\n\n<p>I suggest a way(not the only way) here. First you can treat $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ as a limit of a <strong>function</strong> $f:\\Bbb R\\to\\Bbb R$, rather than a sequence, to infinity. So you may write $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ instead.</p>\n\n<p>Then, if you have admit that $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ to be true, then you can directly use the famous theorem, Sequential Characterization of Limits(in particular, the version that related to limits as $x$ goes to $\\infty$, if you don't know, just google it!). 
Then you can instantly get that $\\lim_{n \\to \\infty} (1+\\frac{1}{q_{n}})^{q_{n}} = e$ for all rational sequence $q_n$ that diverges to $\\infty$.</p>\n\n<p>PS: I'm afraid that if you are only allowable to admit $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ but not $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$, then your claim can not be proved, though I'm not sure about this.</p>\n" }, { "answer_id": 2083219, "score": 2, "text": "<p>It is not much difficult to establish what you seek. The function $$f(x) = \\left(1 + \\frac{1}{x}\\right)^{x}$$ for $x \\in \\mathbb{Q}^{+}$ is an increasing function. This is easy to prove via Bernoulli's inequality. Let $x, y$ be positive rationals with $x &gt; y$ then we have $$\\left(1 + \\dfrac{1}{x}\\right)^{x/y} \\geq 1 + \\frac{x}{y}\\cdot\\frac{1}{x} = 1 + \\frac{1}{y}$$ so that $f(x) \\geq f(y)$.</p>\n\n<p>Let $q_{n}$ be a sequence of rationals tending to $\\infty$ and let $s_{n} = \\lfloor q_{n}\\rfloor$ and $t_{n} = s_{n} + 1$. Clearly we can see that from a certain value of $n$ the sequences $s_{n}, t_{n}$ are made of positive integers only and both tend to $\\infty$. Now we know that $f(n) \\to e$ as $n \\to \\infty$ and hence $f(s_{n}), f(t_{n})$ both tend to $e$ as $n \\to \\infty$. Since $s_{n}\\leq q_{n}\\leq t_{n}$ it follows that $f(s_{n}) \\leq f(q_{n})\\leq f(t_{n})$ and by Squeeze theorem $f(q_{n}) \\to e$ as $n \\to \\infty$.</p>\n\n<hr>\n\n<p><strong>Note</strong>: The above proof requires the <a href=\"https://math.stackexchange.com/a/327189/72031\">version of Bernoulli's inequality with rational exponents</a>.</p>\n" }, { "answer_id": 2080802, "score": -1, "text": "<p>(long comment)</p>\n\n<p>I suggest a way(not the only way) here. First you can treat $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ as a limit of a <strong>function</strong> $f:\\Bbb R\\to\\Bbb R$, rather than a sequence, to infinity. So you may write $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ instead.</p>\n\n<p>Then, if you have admit that $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$ to be true, then you can directly use the famous theorem, Sequential Characterization of Limits(in particular, the version that related to limits as $x$ goes to $\\infty$, if you don't know, just google it!). Then you can instantly get that $\\lim_{n \\to \\infty} (1+\\frac{1}{q_{n}})^{q_{n}} = e$ for all rational sequence $q_n$ that diverges to $\\infty$.</p>\n\n<p>PS: I'm afraid that if you are only allowable to admit $\\lim_{n \\to \\infty}(1+\\frac{1}{n})^{n}=e$ but not $\\lim_{x \\to \\infty}(1+\\frac{1}{x})^{x}=e$, then your claim can not be proved, though I'm not sure about this.</p>\n" } ]
Question: e as limit with a rational sequence instead of n Answer 1: <p>It is not very difficult to establish what you seek. The function $$f(x) = \left(1 + \frac{1}{x}\right)^{x}$$ for $x \in \mathbb{Q}^{+}$ is an increasing function. This is easy to prove via Bernoulli's inequality. Let $x, y$ be positive rationals with $x &gt; y$; then we have $$\left(1 + \dfrac{1}{x}\right)^{x/y} \geq 1 + \frac{x}{y}\cdot\frac{1}{x} = 1 + \frac{1}{y},$$ and raising both sides to the power $y$ gives $f(x) \geq f(y)$.</p> <p>Let $q_{n}$ be a sequence of rationals tending to $\infty$ and let $s_{n} = \lfloor q_{n}\rfloor$ and $t_{n} = s_{n} + 1$. Clearly, from a certain value of $n$ onward the sequences $s_{n}, t_{n}$ consist of positive integers only and both tend to $\infty$. Now we know that $f(n) \to e$ as $n \to \infty$, and hence $f(s_{n}), f(t_{n})$ both tend to $e$ as $n \to \infty$. Since $s_{n}\leq q_{n}\leq t_{n}$, it follows that $f(s_{n}) \leq f(q_{n})\leq f(t_{n})$, and by the Squeeze theorem $f(q_{n}) \to e$ as $n \to \infty$.</p> <hr> <p><strong>Note</strong>: The above proof requires the <a href="https://math.stackexchange.com/a/327189/72031">version of Bernoulli's inequality with rational exponents</a>.</p> Answer 2: <p>(long comment)</p> <p>I suggest one way (not the only way) here. First, you can treat $\lim_{n \to \infty}(1+\frac{1}{n})^{n}=e$ as the limit at infinity of a <strong>function</strong> $f:\Bbb R\to\Bbb R$ rather than of a sequence. So you may write $\lim_{x \to \infty}(1+\frac{1}{x})^{x}=e$ instead.</p> <p>Then, if you admit that $\lim_{x \to \infty}(1+\frac{1}{x})^{x}=e$ is true, you can directly use the famous theorem, the Sequential Characterization of Limits (in particular, the version that relates to limits as $x$ goes to $\infty$; if you don't know it, just google it!). You then instantly get that $\lim_{n \to \infty} (1+\frac{1}{q_{n}})^{q_{n}} = e$ for every rational sequence $q_n$ that diverges to $\infty$.</p> <p>PS: I'm afraid that if you are only allowed to assume $\lim_{n \to \infty}(1+\frac{1}{n})^{n}=e$ but not $\lim_{x \to \infty}(1+\frac{1}{x})^{x}=e$, then your claim cannot be proved, though I'm not sure about this.</p>
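A quick numerical sanity check of the statement proved above. The particular rational, non-integer sequence $q_n = n + 1/3$ is an illustrative choice of this write-up; its floor and ceiling are $n$ and $n+1$, matching the $s_n \le q_n \le t_n$ sandwich of the first answer:

```python
from fractions import Fraction
import math

# q_n = n + 1/3: a rational, non-integer sequence diverging to +infinity.
for n in (10, 100, 1000, 10000):
    q = float(Fraction(3 * n + 1, 3))
    val = (1.0 + 1.0 / q) ** q
    print(f"n = {n:>5}:  (1 + 1/q_n)^q_n = {val:.10f},  |error| = {abs(val - math.e):.2e}")
```

The printed errors shrink roughly like $e/(2q_n)$, which is the expected first-order behavior of $(1+1/q)^q$ as $q \to \infty$.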
Compare definitions of locally compact spaces
2,080,931
4
https://math.stackexchange.com/questions/2080931/compare-definitions-of-locally-compact-spaces
<p>I already searched and saw some questions about definitions of locally compact spaces (like <a href="https://math.stackexchange.com/questions/1574608/two-definitions-of-locally-compact-space">Two definitions of locally compact space</a>), but I could NOT find the exact comparison of the two definitions I want to discuss below:</p> <p><strong>Definition 1:</strong> Let T=(S,τ) be a topological space. Then T is locally compact iff every point of S has a local basis $\mathscr B$ such that all elements of $\mathscr B$ are compact. (from <a href="https://proofwiki.org/wiki/Definition:Locally_Compact" rel="nofollow noreferrer">https://proofwiki.org/wiki/Definition:Locally_Compact</a>)</p> <p><strong>Definition 2:</strong> Let T=(S,τ) be a topological space. Then T is locally compact iff: a) T is Hausdorff; and b) every point has a compact neighbourhood (from <a href="https://www.math.ksu.edu/~nagy/real-an/1-05-top-loc-comp.pdf" rel="nofollow noreferrer">https://www.math.ksu.edu/~nagy/real-an/1-05-top-loc-comp.pdf</a>)</p> <p>I could see that from Definition 1, it is easy to get property b of Definition 2.</p> <p>From <a href="https://proofwiki.org/wiki/Definition:Locally_Compact" rel="nofollow noreferrer">https://proofwiki.org/wiki/Definition:Locally_Compact</a>, it comments that "if T is a Hausdorff space, then Definition 1 and Definition 2 are equivalent" - <strong>could anyone help to prove the direction from Definition 2 to Definition 1 when T is a Hausdorff space?</strong></p>
[ { "answer_id": 2082793, "score": 3, "text": "<p>For some point <span class=\"math-container\">$x$</span> take its compact neighborhood <span class=\"math-container\">$K$</span>. \nWe want to show that for any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>, there is some compact neighborhood <span class=\"math-container\">$K'\\subseteq U$</span> and we will use <span class=\"math-container\">$K$</span> to construct it.</p>\n\n<p>So, take any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>. If <span class=\"math-container\">$K\\subseteq U$</span>, we are done. Otherwise, <span class=\"math-container\">$K\\cap U^c$</span> is closed subset of <span class=\"math-container\">$K$</span>, thus compact. Since space is Hausdorff, we can find open <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$V'$</span> such that <span class=\"math-container\">$K\\cap U^c\\subseteq V$</span> and <span class=\"math-container\">$x\\in V'$</span>, <span class=\"math-container\">$V\\cap V'=\\emptyset$</span>. Define <span class=\"math-container\">$K' = K\\cap V^c \\subseteq K$</span>. You can easily check that <span class=\"math-container\">$K'$</span> is compact neighborhood of <span class=\"math-container\">$x$</span> contained in <span class=\"math-container\">$U$</span>.</p>\n\n<p>Edit:</p>\n\n<p>Let me clarify, in Hausdorff space you can separate <em>compact</em> sets from points. Let <span class=\"math-container\">$K$</span> be compact and <span class=\"math-container\">$x\\not\\in K$</span>. For each <span class=\"math-container\">$y\\in K$</span> choose open <span class=\"math-container\">$y\\in U_y$</span>, <span class=\"math-container\">$x\\in V_y$</span> such that <span class=\"math-container\">$U_y\\cap V_y=\\emptyset$</span>. <span class=\"math-container\">$\\{U_y\\mid y\\in K\\}$</span> is open cover for <span class=\"math-container\">$K$</span>, so we can find finite subcover <span class=\"math-container\">$\\{U_{y_1},\\ldots,U_{y_n}\\}$</span> of <span class=\"math-container\">$K$</span>. Define <span class=\"math-container\">$U=\\cup_{i=1}^n U_{y_i}$</span>, <span class=\"math-container\">$V=\\cap_{i=1}^n V_{y_i}$</span>. Then <span class=\"math-container\">$K\\subseteq U$</span>, <span class=\"math-container\">$x\\in V$</span>, <span class=\"math-container\">$U\\cap V =\\emptyset$</span>.</p>\n" }, { "answer_id": 2083324, "score": 1, "text": "<p>To get from Def'n 2 to Def'n 1 for a Hausdorff space. For $p\\in S$ let $U$ be a compact nbhd of $p.$ Let $V\\in \\tau$ with $p\\in V\\subset U.$ Then $Cl_X(V)$ is compact, because it is a closed subset of the compact Hausdorff space $U.$</p>\n\n<p>(i). Observe that for all $W\\subset V$ we have ($W$ is open in the space $U$) iff $W\\in \\tau.$</p>\n\n<p>Let $B$ be an open local base (basis) at $p$ in the space $X.$ Then $C=\\{V\\cap b:b\\in B\\}$ is an open local base at $p$ in the space $X$ and also in the space $U$, by (i).</p>\n\n<p>Now a compact Hausdorff space is a regular space. 
So $U$ is regular.</p>\n\n<p>So for each $c\\in C$ we can choose $c'$ where $c'$ is open in $U$ and $p\\in c'\\subset Cl_U(c')\\subset c.$ Let $C'=\\{c': c\\in C\\}.$ By (i), $C'$ is a local open base at $p.$ And $Cl_U(c')$ (which is equal to $Cl_X(c'))$ is a closed subset of the compact Hausdorff space space $U$ so $Cl_U(c')$ is compact.</p>\n\n<p>Now for $p\\in Y\\in T,$ take $c\\in C$ with $p\\in c \\subset Y.$ Then $\\;\\overline {c'}\\;$ is a compact nbhd of $p,$ and is a subset of $Y. $ And we are done.</p>\n\n<p>NOTE: To show that a compact Hausdorff $S$ space is regular, let $p\\in S$ with $p\\not \\in A=\\bar A\\subset S.$ To find disjoint open $U,V$ with $p\\in U$ and $A\\subset V$: </p>\n\n<p>(i). If $A$ is empty let $U=S$ and $V=\\emptyset.$</p>\n\n<p>(ii). If $A$ is not empty then for each $a\\in A$ let $U_a,V_a$ be disjoint open sets with $p\\in U_a$ and $a\\in B_a.$ Now $A$ is compact (as it is a closed subset of the compact Hausdorff space $S$) so the open cover $\\{B_a:a\\in A\\}$ of $A$ has a finite non-empty sub-cover $\\{B_a:a\\in F\\}$ where $F$ is a finite subset of $A.$ </p>\n\n<p>Let $U=\\cap_{a\\in F}U_a$ and $V=\\cup_{a\\in F}B_a.$</p>\n" }, { "answer_id": 2082793, "score": 3, "text": "<p>For some point <span class=\"math-container\">$x$</span> take its compact neighborhood <span class=\"math-container\">$K$</span>. \nWe want to show that for any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>, there is some compact neighborhood <span class=\"math-container\">$K'\\subseteq U$</span> and we will use <span class=\"math-container\">$K$</span> to construct it.</p>\n\n<p>So, take any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>. If <span class=\"math-container\">$K\\subseteq U$</span>, we are done. Otherwise, <span class=\"math-container\">$K\\cap U^c$</span> is closed subset of <span class=\"math-container\">$K$</span>, thus compact. Since space is Hausdorff, we can find open <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$V'$</span> such that <span class=\"math-container\">$K\\cap U^c\\subseteq V$</span> and <span class=\"math-container\">$x\\in V'$</span>, <span class=\"math-container\">$V\\cap V'=\\emptyset$</span>. Define <span class=\"math-container\">$K' = K\\cap V^c \\subseteq K$</span>. You can easily check that <span class=\"math-container\">$K'$</span> is compact neighborhood of <span class=\"math-container\">$x$</span> contained in <span class=\"math-container\">$U$</span>.</p>\n\n<p>Edit:</p>\n\n<p>Let me clarify, in Hausdorff space you can separate <em>compact</em> sets from points. Let <span class=\"math-container\">$K$</span> be compact and <span class=\"math-container\">$x\\not\\in K$</span>. For each <span class=\"math-container\">$y\\in K$</span> choose open <span class=\"math-container\">$y\\in U_y$</span>, <span class=\"math-container\">$x\\in V_y$</span> such that <span class=\"math-container\">$U_y\\cap V_y=\\emptyset$</span>. <span class=\"math-container\">$\\{U_y\\mid y\\in K\\}$</span> is open cover for <span class=\"math-container\">$K$</span>, so we can find finite subcover <span class=\"math-container\">$\\{U_{y_1},\\ldots,U_{y_n}\\}$</span> of <span class=\"math-container\">$K$</span>. Define <span class=\"math-container\">$U=\\cup_{i=1}^n U_{y_i}$</span>, <span class=\"math-container\">$V=\\cap_{i=1}^n V_{y_i}$</span>. 
Then <span class=\"math-container\">$K\\subseteq U$</span>, <span class=\"math-container\">$x\\in V$</span>, <span class=\"math-container\">$U\\cap V =\\emptyset$</span>.</p>\n" }, { "answer_id": 2083324, "score": 1, "text": "<p>To get from Def'n 2 to Def'n 1 for a Hausdorff space. For $p\\in S$ let $U$ be a compact nbhd of $p.$ Let $V\\in \\tau$ with $p\\in V\\subset U.$ Then $Cl_X(V)$ is compact, because it is a closed subset of the compact Hausdorff space $U.$</p>\n\n<p>(i). Observe that for all $W\\subset V$ we have ($W$ is open in the space $U$) iff $W\\in \\tau.$</p>\n\n<p>Let $B$ be an open local base (basis) at $p$ in the space $X.$ Then $C=\\{V\\cap b:b\\in B\\}$ is an open local base at $p$ in the space $X$ and also in the space $U$, by (i).</p>\n\n<p>Now a compact Hausdorff space is a regular space. So $U$ is regular.</p>\n\n<p>So for each $c\\in C$ we can choose $c'$ where $c'$ is open in $U$ and $p\\in c'\\subset Cl_U(c')\\subset c.$ Let $C'=\\{c': c\\in C\\}.$ By (i), $C'$ is a local open base at $p.$ And $Cl_U(c')$ (which is equal to $Cl_X(c'))$ is a closed subset of the compact Hausdorff space space $U$ so $Cl_U(c')$ is compact.</p>\n\n<p>Now for $p\\in Y\\in T,$ take $c\\in C$ with $p\\in c \\subset Y.$ Then $\\;\\overline {c'}\\;$ is a compact nbhd of $p,$ and is a subset of $Y. $ And we are done.</p>\n\n<p>NOTE: To show that a compact Hausdorff $S$ space is regular, let $p\\in S$ with $p\\not \\in A=\\bar A\\subset S.$ To find disjoint open $U,V$ with $p\\in U$ and $A\\subset V$: </p>\n\n<p>(i). If $A$ is empty let $U=S$ and $V=\\emptyset.$</p>\n\n<p>(ii). If $A$ is not empty then for each $a\\in A$ let $U_a,V_a$ be disjoint open sets with $p\\in U_a$ and $a\\in B_a.$ Now $A$ is compact (as it is a closed subset of the compact Hausdorff space $S$) so the open cover $\\{B_a:a\\in A\\}$ of $A$ has a finite non-empty sub-cover $\\{B_a:a\\in F\\}$ where $F$ is a finite subset of $A.$ </p>\n\n<p>Let $U=\\cap_{a\\in F}U_a$ and $V=\\cup_{a\\in F}B_a.$</p>\n" }, { "answer_id": 2082793, "score": 3, "text": "<p>For some point <span class=\"math-container\">$x$</span> take its compact neighborhood <span class=\"math-container\">$K$</span>. \nWe want to show that for any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>, there is some compact neighborhood <span class=\"math-container\">$K'\\subseteq U$</span> and we will use <span class=\"math-container\">$K$</span> to construct it.</p>\n\n<p>So, take any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>. If <span class=\"math-container\">$K\\subseteq U$</span>, we are done. Otherwise, <span class=\"math-container\">$K\\cap U^c$</span> is closed subset of <span class=\"math-container\">$K$</span>, thus compact. Since space is Hausdorff, we can find open <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$V'$</span> such that <span class=\"math-container\">$K\\cap U^c\\subseteq V$</span> and <span class=\"math-container\">$x\\in V'$</span>, <span class=\"math-container\">$V\\cap V'=\\emptyset$</span>. Define <span class=\"math-container\">$K' = K\\cap V^c \\subseteq K$</span>. You can easily check that <span class=\"math-container\">$K'$</span> is compact neighborhood of <span class=\"math-container\">$x$</span> contained in <span class=\"math-container\">$U$</span>.</p>\n\n<p>Edit:</p>\n\n<p>Let me clarify, in Hausdorff space you can separate <em>compact</em> sets from points. 
Let <span class=\"math-container\">$K$</span> be compact and <span class=\"math-container\">$x\\not\\in K$</span>. For each <span class=\"math-container\">$y\\in K$</span> choose open <span class=\"math-container\">$y\\in U_y$</span>, <span class=\"math-container\">$x\\in V_y$</span> such that <span class=\"math-container\">$U_y\\cap V_y=\\emptyset$</span>. <span class=\"math-container\">$\\{U_y\\mid y\\in K\\}$</span> is open cover for <span class=\"math-container\">$K$</span>, so we can find finite subcover <span class=\"math-container\">$\\{U_{y_1},\\ldots,U_{y_n}\\}$</span> of <span class=\"math-container\">$K$</span>. Define <span class=\"math-container\">$U=\\cup_{i=1}^n U_{y_i}$</span>, <span class=\"math-container\">$V=\\cap_{i=1}^n V_{y_i}$</span>. Then <span class=\"math-container\">$K\\subseteq U$</span>, <span class=\"math-container\">$x\\in V$</span>, <span class=\"math-container\">$U\\cap V =\\emptyset$</span>.</p>\n" }, { "answer_id": 2083324, "score": 1, "text": "<p>To get from Def'n 2 to Def'n 1 for a Hausdorff space. For $p\\in S$ let $U$ be a compact nbhd of $p.$ Let $V\\in \\tau$ with $p\\in V\\subset U.$ Then $Cl_X(V)$ is compact, because it is a closed subset of the compact Hausdorff space $U.$</p>\n\n<p>(i). Observe that for all $W\\subset V$ we have ($W$ is open in the space $U$) iff $W\\in \\tau.$</p>\n\n<p>Let $B$ be an open local base (basis) at $p$ in the space $X.$ Then $C=\\{V\\cap b:b\\in B\\}$ is an open local base at $p$ in the space $X$ and also in the space $U$, by (i).</p>\n\n<p>Now a compact Hausdorff space is a regular space. So $U$ is regular.</p>\n\n<p>So for each $c\\in C$ we can choose $c'$ where $c'$ is open in $U$ and $p\\in c'\\subset Cl_U(c')\\subset c.$ Let $C'=\\{c': c\\in C\\}.$ By (i), $C'$ is a local open base at $p.$ And $Cl_U(c')$ (which is equal to $Cl_X(c'))$ is a closed subset of the compact Hausdorff space space $U$ so $Cl_U(c')$ is compact.</p>\n\n<p>Now for $p\\in Y\\in T,$ take $c\\in C$ with $p\\in c \\subset Y.$ Then $\\;\\overline {c'}\\;$ is a compact nbhd of $p,$ and is a subset of $Y. $ And we are done.</p>\n\n<p>NOTE: To show that a compact Hausdorff $S$ space is regular, let $p\\in S$ with $p\\not \\in A=\\bar A\\subset S.$ To find disjoint open $U,V$ with $p\\in U$ and $A\\subset V$: </p>\n\n<p>(i). If $A$ is empty let $U=S$ and $V=\\emptyset.$</p>\n\n<p>(ii). If $A$ is not empty then for each $a\\in A$ let $U_a,V_a$ be disjoint open sets with $p\\in U_a$ and $a\\in B_a.$ Now $A$ is compact (as it is a closed subset of the compact Hausdorff space $S$) so the open cover $\\{B_a:a\\in A\\}$ of $A$ has a finite non-empty sub-cover $\\{B_a:a\\in F\\}$ where $F$ is a finite subset of $A.$ </p>\n\n<p>Let $U=\\cap_{a\\in F}U_a$ and $V=\\cup_{a\\in F}B_a.$</p>\n" }, { "answer_id": 2082793, "score": 3, "text": "<p>For some point <span class=\"math-container\">$x$</span> take its compact neighborhood <span class=\"math-container\">$K$</span>. \nWe want to show that for any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>, there is some compact neighborhood <span class=\"math-container\">$K'\\subseteq U$</span> and we will use <span class=\"math-container\">$K$</span> to construct it.</p>\n\n<p>So, take any open neighborhood <span class=\"math-container\">$U$</span> of <span class=\"math-container\">$x$</span>. If <span class=\"math-container\">$K\\subseteq U$</span>, we are done. 
Otherwise, <span class=\"math-container\">$K\\cap U^c$</span> is closed subset of <span class=\"math-container\">$K$</span>, thus compact. Since space is Hausdorff, we can find open <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$V'$</span> such that <span class=\"math-container\">$K\\cap U^c\\subseteq V$</span> and <span class=\"math-container\">$x\\in V'$</span>, <span class=\"math-container\">$V\\cap V'=\\emptyset$</span>. Define <span class=\"math-container\">$K' = K\\cap V^c \\subseteq K$</span>. You can easily check that <span class=\"math-container\">$K'$</span> is compact neighborhood of <span class=\"math-container\">$x$</span> contained in <span class=\"math-container\">$U$</span>.</p>\n\n<p>Edit:</p>\n\n<p>Let me clarify, in Hausdorff space you can separate <em>compact</em> sets from points. Let <span class=\"math-container\">$K$</span> be compact and <span class=\"math-container\">$x\\not\\in K$</span>. For each <span class=\"math-container\">$y\\in K$</span> choose open <span class=\"math-container\">$y\\in U_y$</span>, <span class=\"math-container\">$x\\in V_y$</span> such that <span class=\"math-container\">$U_y\\cap V_y=\\emptyset$</span>. <span class=\"math-container\">$\\{U_y\\mid y\\in K\\}$</span> is open cover for <span class=\"math-container\">$K$</span>, so we can find finite subcover <span class=\"math-container\">$\\{U_{y_1},\\ldots,U_{y_n}\\}$</span> of <span class=\"math-container\">$K$</span>. Define <span class=\"math-container\">$U=\\cup_{i=1}^n U_{y_i}$</span>, <span class=\"math-container\">$V=\\cap_{i=1}^n V_{y_i}$</span>. Then <span class=\"math-container\">$K\\subseteq U$</span>, <span class=\"math-container\">$x\\in V$</span>, <span class=\"math-container\">$U\\cap V =\\emptyset$</span>.</p>\n" }, { "answer_id": 2083324, "score": 1, "text": "<p>To get from Def'n 2 to Def'n 1 for a Hausdorff space. For $p\\in S$ let $U$ be a compact nbhd of $p.$ Let $V\\in \\tau$ with $p\\in V\\subset U.$ Then $Cl_X(V)$ is compact, because it is a closed subset of the compact Hausdorff space $U.$</p>\n\n<p>(i). Observe that for all $W\\subset V$ we have ($W$ is open in the space $U$) iff $W\\in \\tau.$</p>\n\n<p>Let $B$ be an open local base (basis) at $p$ in the space $X.$ Then $C=\\{V\\cap b:b\\in B\\}$ is an open local base at $p$ in the space $X$ and also in the space $U$, by (i).</p>\n\n<p>Now a compact Hausdorff space is a regular space. So $U$ is regular.</p>\n\n<p>So for each $c\\in C$ we can choose $c'$ where $c'$ is open in $U$ and $p\\in c'\\subset Cl_U(c')\\subset c.$ Let $C'=\\{c': c\\in C\\}.$ By (i), $C'$ is a local open base at $p.$ And $Cl_U(c')$ (which is equal to $Cl_X(c'))$ is a closed subset of the compact Hausdorff space space $U$ so $Cl_U(c')$ is compact.</p>\n\n<p>Now for $p\\in Y\\in T,$ take $c\\in C$ with $p\\in c \\subset Y.$ Then $\\;\\overline {c'}\\;$ is a compact nbhd of $p,$ and is a subset of $Y. $ And we are done.</p>\n\n<p>NOTE: To show that a compact Hausdorff $S$ space is regular, let $p\\in S$ with $p\\not \\in A=\\bar A\\subset S.$ To find disjoint open $U,V$ with $p\\in U$ and $A\\subset V$: </p>\n\n<p>(i). If $A$ is empty let $U=S$ and $V=\\emptyset.$</p>\n\n<p>(ii). 
If $A$ is not empty then for each $a\\in A$ let $U_a,V_a$ be disjoint open sets with $p\\in U_a$ and $a\\in B_a.$ Now $A$ is compact (as it is a closed subset of the compact Hausdorff space $S$) so the open cover $\\{B_a:a\\in A\\}$ of $A$ has a finite non-empty sub-cover $\\{B_a:a\\in F\\}$ where $F$ is a finite subset of $A.$ </p>\n\n<p>Let $U=\\cap_{a\\in F}U_a$ and $V=\\cup_{a\\in F}B_a.$</p>\n" } ]
Question: Compare definitions of locally compact spaces Answer 1: <p>For some point <span class="math-container">$x$</span> take its compact neighborhood <span class="math-container">$K$</span>. We want to show that for any open neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span>, there is some compact neighborhood <span class="math-container">$K'\subseteq U$</span> and we will use <span class="math-container">$K$</span> to construct it.</p> <p>So, take any open neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span>. If <span class="math-container">$K\subseteq U$</span>, we are done. Otherwise, <span class="math-container">$K\cap U^c$</span> is a closed subset of <span class="math-container">$K$</span>, thus compact. Since the space is Hausdorff, we can find open <span class="math-container">$V$</span> and <span class="math-container">$V'$</span> such that <span class="math-container">$K\cap U^c\subseteq V$</span> and <span class="math-container">$x\in V'$</span>, <span class="math-container">$V\cap V'=\emptyset$</span>. Define <span class="math-container">$K' = K\cap V^c \subseteq K$</span>. You can easily check that <span class="math-container">$K'$</span> is a compact neighborhood of <span class="math-container">$x$</span> contained in <span class="math-container">$U$</span>.</p> <p>Edit:</p> <p>Let me clarify: in a Hausdorff space you can separate <em>compact</em> sets from points. Let <span class="math-container">$K$</span> be compact and <span class="math-container">$x\not\in K$</span>. For each <span class="math-container">$y\in K$</span> choose open <span class="math-container">$y\in U_y$</span>, <span class="math-container">$x\in V_y$</span> such that <span class="math-container">$U_y\cap V_y=\emptyset$</span>. <span class="math-container">$\{U_y\mid y\in K\}$</span> is an open cover of <span class="math-container">$K$</span>, so we can find a finite subcover <span class="math-container">$\{U_{y_1},\ldots,U_{y_n}\}$</span> of <span class="math-container">$K$</span>. Define <span class="math-container">$U=\cup_{i=1}^n U_{y_i}$</span>, <span class="math-container">$V=\cap_{i=1}^n V_{y_i}$</span>. Then <span class="math-container">$K\subseteq U$</span>, <span class="math-container">$x\in V$</span>, <span class="math-container">$U\cap V =\emptyset$</span>.</p> Answer 2: <p>To get from Def'n 2 to Def'n 1 for a Hausdorff space. For $p\in S$ let $U$ be a compact nbhd of $p.$ Let $V\in \tau$ with $p\in V\subset U.$ Then $Cl_X(V)$ is compact, because it is a closed subset of the compact Hausdorff space $U.$</p> <p>(i). Observe that for all $W\subset V$ we have ($W$ is open in the space $U$) iff $W\in \tau.$</p> <p>Let $B$ be an open local base (basis) at $p$ in the space $X.$ Then $C=\{V\cap b:b\in B\}$ is an open local base at $p$ in the space $X$ and also in the space $U$, by (i).</p> <p>Now a compact Hausdorff space is a regular space. So $U$ is regular.</p> <p>So for each $c\in C$ we can choose $c'$ where $c'$ is open in $U$ and $p\in c'\subset Cl_U(c')\subset c.$ Let $C'=\{c': c\in C\}.$ By (i), $C'$ is a local open base at $p.$ And $Cl_U(c')$ (which is equal to $Cl_X(c')$) is a closed subset of the compact Hausdorff space $U$, so $Cl_U(c')$ is compact.</p> <p>Now for $p\in Y\in T,$ take $c\in C$ with $p\in c \subset Y.$ Then $\;\overline {c'}\;$ is a compact nbhd of $p,$ and is a subset of $Y$. And we are done.</p> <p>NOTE: To show that a compact Hausdorff space $S$ is regular, let $p\in S$ with $p\not \in A=\bar A\subset S.$ To find disjoint open $U,V$ with $p\in U$ and $A\subset V$: </p> <p>(i). If $A$ is empty let $U=S$ and $V=\emptyset.$</p> <p>(ii). If $A$ is not empty then for each $a\in A$ let $U_a,V_a$ be disjoint open sets with $p\in U_a$ and $a\in V_a.$ Now $A$ is compact (as it is a closed subset of the compact Hausdorff space $S$) so the open cover $\{V_a:a\in A\}$ of $A$ has a finite non-empty sub-cover $\{V_a:a\in F\}$ where $F$ is a finite subset of $A.$ </p> <p>Let $U=\cap_{a\in F}U_a$ and $V=\cup_{a\in F}V_a.$</p>
How to prove $\lim\limits_{x\to 0}x^m (\ln x)^n = 0$
2,080,954
4
https://math.stackexchange.com/questions/2080954/how-to-prove-lim-limits-x-to-0xm-ln-xn-0
<p><span class="math-container">$$\lim\limits_{x\to 0^+}x^m (\ln x)^n = 0\quad \,\text{for} \quad m,n \in \mathbb N$$</span></p> <p><strong>Question</strong>: How can I prove this?</p> <p>Is there a better way than saying, well, if the factor <span class="math-container">$\lim\limits_{x\to 0}x = 0$</span> the whole equation is <span class="math-container">$0$</span>?</p> <p><strong>Note</strong>: I have found a similiar <a href="https://math.stackexchange.com/questions/868935/lim-x-to-0-lnx-cdot-x-0-by-boundedness-of-lnx-cdot-x">post</a> but however I need to solve the task without integrals.</p> <p>I appreciate every hint.</p>
[ { "answer_id": 2080989, "score": 6, "text": "<p>Changing variable $y=-\\ln x$ the limit becomes\n$$(-1)^n\\lim_{y\\to+\\infty}e^{-my}y^n.$$\nSince for positive $z$\n$$e^z=\\sum_{k=0}^\\infty \\frac{z^k}{k!}\\geq \\frac{z^{n+1}}{(n+1)!}$$\nputting $z=my$ we obtain\n$$ e^{-my}y^n=\\frac{y^n}{e^{my}}\\leq\\frac{y^n}{\\frac{1}{(n+1)!}(my)^{n+1}}=\\frac{(n+1)!}{m^{n+1}}\\frac1y$$\nand this goes to zero as $y\\to+\\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p>\n" }, { "answer_id": 2080969, "score": 2, "text": "<p>The issue here is that <span class=\"math-container\">$\\lim_{x\\rightarrow 0} x = 0$</span>, but isn't it possible that <span class=\"math-container\">$\\ln(x)$</span> approaches <span class=\"math-container\">$-\\infty$</span> <em>faster</em> than <span class=\"math-container\">$x$</span> goes to <span class=\"math-container\">$0$</span>? For instance, consider <span class=\"math-container\">$e^x/x$</span>. Certainly this function approaches <span class=\"math-container\">$\\infty$</span> as <span class=\"math-container\">$x$</span> goes to infinity, but <span class=\"math-container\">$1/x$</span> approaches <span class=\"math-container\">$0$</span> as <span class=\"math-container\">$x$</span> goes to infinity. So <span class=\"math-container\">$\\lim_{x\\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class=\"math-container\">$0$</span>.</p>\n<p>The real question to consider: what tools do you have for solving limits? Do you have the <span class=\"math-container\">$\\epsilon$</span>-<span class=\"math-container\">$\\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p>\n" }, { "answer_id": 2080997, "score": 2, "text": "<p>Let $x=e^{-t}$. Then</p>\n\n<p>$$L=\\lim_{t\\to\\infty}{e^{-mt}t^n}=\\left(\\lim_{t\\to\\infty}{e^{-mt/n}t}\\right)^n=\\left(\\frac nm\\lim_{t\\to\\infty}{e^{-t}t}\\right)^n.$$</p>\n\n<p>Now by induction,</p>\n\n<p>$$t\\ge3\\implies e^{-t-1}(t+1)=\\frac{t+1}{et}e^{-t}t&lt;\\frac12e^{-t}t$$ and the limit is $0$.</p>\n" }, { "answer_id": 2496092, "score": 2, "text": "<p>Write <span class=\"math-container\">$$x^m\\ln^nx=e^{m\\ln x+\\ln\\ln^n x}:=e^{\\varphi(x)},\\quad \\:\\varphi(x)=m\\ln x\\left(1+{\\ln \\ln^n x\\over m\\ln x}\\right).$$</span>\nWe have that</p>\n<p><span class=\"math-container\">$$\\forall x&gt;0\\:\\:\\:\\forall \\varepsilon&gt;0\\:\\:\\exists N_{\\varepsilon,x}&gt;0\\:\\:\\forall n\\ge N_{\\varepsilon,x}\\implies \\left|m\\ln x-\\varphi (x)\\right|\\le \\varepsilon\\:|m\\ln x|.$$</span> <span class=\"math-container\">$$$$</span></p>\n<p>So that <span class=\"math-container\">$\\:\\varphi(x)\\underset{\\:0^+}{\\sim}\\:m\\ln x\\:$</span>, since <span class=\"math-container\">$\\:\\lim_{x\\to 0^+}\\Large(\\normalsize\\varphi(x)/(m\\ln x)-1\\Large)\\normalsize\\:=0.$</span> <span class=\"math-container\">$$$$</span></p>\n<p>By now it's a piece of cake to prove that</p>\n<p><span class=\"math-container\">$$ \\forall (m,n)\\in \\mathbf N^2\\:,\\:\\: e^{\\varphi(x)}\\underset {\\:0^+}\\sim\\:e^{\\ln x^m}=\\:x^m\\overset{\\:x\\to 0^+}\\longrightarrow \\:0.$$</span></p>\n" }, { "answer_id": 2080993, "score": 1, "text": "<p>The ideal solution here is highly dependent on the mathematical toolbox available to you. 
I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p>\n\n<p><strong>Claim:</strong></p>\n\n<p>$$\\lim\\limits_{x\\to 0^+}x^m (\\ln x)^n = 0, \\,\\text{for} \\quad m,n \\in \\mathbb N$$</p>\n\n<p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\\alpha\\ln x \\to0$ for all (rational) $\\alpha&gt;0$, then we have our result.</p>\n\n<p><em>Observation:</em> $$\\lim_{x\\to0^+}x^\\alpha\\ln x=\\lim_{1/x\\to0}(1/x)^\\alpha\\ln(1/x)=\\lim_{x\\to\\infty}\\left[-\\frac{\\ln x}{x^\\alpha}\\right]$$</p>\n\n<p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p>\n\n<p><strong>Lemma:</strong></p>\n\n<p>$$\\ln x \\le x-1 &lt; x \\text{ for } x&gt;0$$</p>\n\n<p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p>\n\n<p><em>Corollary:</em></p>\n\n<p>$$\\ln\\left(x^{1/m}\\right)&lt;x^{1/m}\\implies \\ln x &lt; mx^{1/m} \\text{ for } m&gt;0$$</p>\n\n<p><strong>Bringing it all together:</strong></p>\n\n<p>From this, we can see that $\\frac{\\ln x}{x^\\alpha}&lt;mx^{1/m-\\alpha}$ for any positive $m$.</p>\n\n<p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. </p>\n\n<p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\\frac{\\ln x}{x^\\alpha}\\to 0$ as $x\\to\\infty$.</p>\n\n<p>Retracing our steps, we can see that our claimed inequality must be true.</p>\n" }, { "answer_id": 2081081, "score": 1, "text": "<p>We want to show $x^m(\\ln x)^n \\to 0,$ which is the same as showing $x^m|\\ln x|^n \\to 0.$ Apply $\\ln $ to see this is the same as showing</p>\n\n<p>$$\\tag 1 m\\ln x + n \\ln (|\\ln x|) \\to -\\infty.$$</p>\n\n<p>Now for $u\\ge 4,$ $\\ln u \\le u^{1/2}.^*$ And for small $x&gt;0,$ $|\\ln x| \\ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p>\n\n<p>$$m\\ln x + n |\\ln x|^{1/2} = -m|\\ln x| + n |\\ln x|^{1/2} = |\\ln x|^{1/2}(-m|\\ln x|^{1/2} + n).$$ </p>\n\n<p>As $x\\to 0^+,$ the last expression is looking like $\\infty\\cdot (-\\infty)=-\\infty.$ This proves $(1)$ and we're done.</p>\n\n<p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p>\n" }, { "answer_id": 2080989, "score": 6, "text": "<p>Changing variable $y=-\\ln x$ the limit becomes\n$$(-1)^n\\lim_{y\\to+\\infty}e^{-my}y^n.$$\nSince for positive $z$\n$$e^z=\\sum_{k=0}^\\infty \\frac{z^k}{k!}\\geq \\frac{z^{n+1}}{(n+1)!}$$\nputting $z=my$ we obtain\n$$ e^{-my}y^n=\\frac{y^n}{e^{my}}\\leq\\frac{y^n}{\\frac{1}{(n+1)!}(my)^{n+1}}=\\frac{(n+1)!}{m^{n+1}}\\frac1y$$\nand this goes to zero as $y\\to+\\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p>\n" }, { "answer_id": 2080969, "score": 2, "text": "<p>The issue here is that <span class=\"math-container\">$\\lim_{x\\rightarrow 0} x = 0$</span>, but isn't it possible that <span class=\"math-container\">$\\ln(x)$</span> approaches <span class=\"math-container\">$-\\infty$</span> <em>faster</em> than <span class=\"math-container\">$x$</span> goes to <span class=\"math-container\">$0$</span>? 
For instance, consider <span class=\"math-container\">$e^x/x$</span>. Certainly this function approaches <span class=\"math-container\">$\\infty$</span> as <span class=\"math-container\">$x$</span> goes to infinity, but <span class=\"math-container\">$1/x$</span> approaches <span class=\"math-container\">$0$</span> as <span class=\"math-container\">$x$</span> goes to infinity. So <span class=\"math-container\">$\\lim_{x\\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class=\"math-container\">$0$</span>.</p>\n<p>The real question to consider: what tools do you have for solving limits? Do you have the <span class=\"math-container\">$\\epsilon$</span>-<span class=\"math-container\">$\\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p>\n" }, { "answer_id": 2080997, "score": 2, "text": "<p>Let $x=e^{-t}$. Then</p>\n\n<p>$$L=\\lim_{t\\to\\infty}{e^{-mt}t^n}=\\left(\\lim_{t\\to\\infty}{e^{-mt/n}t}\\right)^n=\\left(\\frac nm\\lim_{t\\to\\infty}{e^{-t}t}\\right)^n.$$</p>\n\n<p>Now by induction,</p>\n\n<p>$$t\\ge3\\implies e^{-t-1}(t+1)=\\frac{t+1}{et}e^{-t}t&lt;\\frac12e^{-t}t$$ and the limit is $0$.</p>\n" }, { "answer_id": 2496092, "score": 2, "text": "<p>Write <span class=\"math-container\">$$x^m\\ln^nx=e^{m\\ln x+\\ln\\ln^n x}:=e^{\\varphi(x)},\\quad \\:\\varphi(x)=m\\ln x\\left(1+{\\ln \\ln^n x\\over m\\ln x}\\right).$$</span>\nWe have that</p>\n<p><span class=\"math-container\">$$\\forall x&gt;0\\:\\:\\:\\forall \\varepsilon&gt;0\\:\\:\\exists N_{\\varepsilon,x}&gt;0\\:\\:\\forall n\\ge N_{\\varepsilon,x}\\implies \\left|m\\ln x-\\varphi (x)\\right|\\le \\varepsilon\\:|m\\ln x|.$$</span> <span class=\"math-container\">$$$$</span></p>\n<p>So that <span class=\"math-container\">$\\:\\varphi(x)\\underset{\\:0^+}{\\sim}\\:m\\ln x\\:$</span>, since <span class=\"math-container\">$\\:\\lim_{x\\to 0^+}\\Large(\\normalsize\\varphi(x)/(m\\ln x)-1\\Large)\\normalsize\\:=0.$</span> <span class=\"math-container\">$$$$</span></p>\n<p>By now it's a piece of cake to prove that</p>\n<p><span class=\"math-container\">$$ \\forall (m,n)\\in \\mathbf N^2\\:,\\:\\: e^{\\varphi(x)}\\underset {\\:0^+}\\sim\\:e^{\\ln x^m}=\\:x^m\\overset{\\:x\\to 0^+}\\longrightarrow \\:0.$$</span></p>\n" }, { "answer_id": 2080993, "score": 1, "text": "<p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p>\n\n<p><strong>Claim:</strong></p>\n\n<p>$$\\lim\\limits_{x\\to 0^+}x^m (\\ln x)^n = 0, \\,\\text{for} \\quad m,n \\in \\mathbb N$$</p>\n\n<p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\\alpha\\ln x \\to0$ for all (rational) $\\alpha&gt;0$, then we have our result.</p>\n\n<p><em>Observation:</em> $$\\lim_{x\\to0^+}x^\\alpha\\ln x=\\lim_{1/x\\to0}(1/x)^\\alpha\\ln(1/x)=\\lim_{x\\to\\infty}\\left[-\\frac{\\ln x}{x^\\alpha}\\right]$$</p>\n\n<p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. 
However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p>\n\n<p><strong>Lemma:</strong></p>\n\n<p>$$\\ln x \\le x-1 &lt; x \\text{ for } x&gt;0$$</p>\n\n<p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p>\n\n<p><em>Corollary:</em></p>\n\n<p>$$\\ln\\left(x^{1/m}\\right)&lt;x^{1/m}\\implies \\ln x &lt; mx^{1/m} \\text{ for } m&gt;0$$</p>\n\n<p><strong>Bringing it all together:</strong></p>\n\n<p>From this, we can see that $\\frac{\\ln x}{x^\\alpha}&lt;mx^{1/m-\\alpha}$ for any positive $m$.</p>\n\n<p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. </p>\n\n<p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\\frac{\\ln x}{x^\\alpha}\\to 0$ as $x\\to\\infty$.</p>\n\n<p>Retracing our steps, we can see that our claimed inequality must be true.</p>\n" }, { "answer_id": 2081081, "score": 1, "text": "<p>We want to show $x^m(\\ln x)^n \\to 0,$ which is the same as showing $x^m|\\ln x|^n \\to 0.$ Apply $\\ln $ to see this is the same as showing</p>\n\n<p>$$\\tag 1 m\\ln x + n \\ln (|\\ln x|) \\to -\\infty.$$</p>\n\n<p>Now for $u\\ge 4,$ $\\ln u \\le u^{1/2}.^*$ And for small $x&gt;0,$ $|\\ln x| \\ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p>\n\n<p>$$m\\ln x + n |\\ln x|^{1/2} = -m|\\ln x| + n |\\ln x|^{1/2} = |\\ln x|^{1/2}(-m|\\ln x|^{1/2} + n).$$ </p>\n\n<p>As $x\\to 0^+,$ the last expression is looking like $\\infty\\cdot (-\\infty)=-\\infty.$ This proves $(1)$ and we're done.</p>\n\n<p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p>\n" }, { "answer_id": 2080989, "score": 6, "text": "<p>Changing variable $y=-\\ln x$ the limit becomes\n$$(-1)^n\\lim_{y\\to+\\infty}e^{-my}y^n.$$\nSince for positive $z$\n$$e^z=\\sum_{k=0}^\\infty \\frac{z^k}{k!}\\geq \\frac{z^{n+1}}{(n+1)!}$$\nputting $z=my$ we obtain\n$$ e^{-my}y^n=\\frac{y^n}{e^{my}}\\leq\\frac{y^n}{\\frac{1}{(n+1)!}(my)^{n+1}}=\\frac{(n+1)!}{m^{n+1}}\\frac1y$$\nand this goes to zero as $y\\to+\\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p>\n" }, { "answer_id": 2080969, "score": 2, "text": "<p>The issue here is that <span class=\"math-container\">$\\lim_{x\\rightarrow 0} x = 0$</span>, but isn't it possible that <span class=\"math-container\">$\\ln(x)$</span> approaches <span class=\"math-container\">$-\\infty$</span> <em>faster</em> than <span class=\"math-container\">$x$</span> goes to <span class=\"math-container\">$0$</span>? For instance, consider <span class=\"math-container\">$e^x/x$</span>. Certainly this function approaches <span class=\"math-container\">$\\infty$</span> as <span class=\"math-container\">$x$</span> goes to infinity, but <span class=\"math-container\">$1/x$</span> approaches <span class=\"math-container\">$0$</span> as <span class=\"math-container\">$x$</span> goes to infinity. So <span class=\"math-container\">$\\lim_{x\\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class=\"math-container\">$0$</span>.</p>\n<p>The real question to consider: what tools do you have for solving limits? Do you have the <span class=\"math-container\">$\\epsilon$</span>-<span class=\"math-container\">$\\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? 
Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p>\n" }, { "answer_id": 2080997, "score": 2, "text": "<p>Let $x=e^{-t}$. Then</p>\n\n<p>$$L=\\lim_{t\\to\\infty}{e^{-mt}t^n}=\\left(\\lim_{t\\to\\infty}{e^{-mt/n}t}\\right)^n=\\left(\\frac nm\\lim_{t\\to\\infty}{e^{-t}t}\\right)^n.$$</p>\n\n<p>Now by induction,</p>\n\n<p>$$t\\ge3\\implies e^{-t-1}(t+1)=\\frac{t+1}{et}e^{-t}t&lt;\\frac12e^{-t}t$$ and the limit is $0$.</p>\n" }, { "answer_id": 2496092, "score": 2, "text": "<p>Write <span class=\"math-container\">$$x^m\\ln^nx=e^{m\\ln x+\\ln\\ln^n x}:=e^{\\varphi(x)},\\quad \\:\\varphi(x)=m\\ln x\\left(1+{\\ln \\ln^n x\\over m\\ln x}\\right).$$</span>\nWe have that</p>\n<p><span class=\"math-container\">$$\\forall x&gt;0\\:\\:\\:\\forall \\varepsilon&gt;0\\:\\:\\exists N_{\\varepsilon,x}&gt;0\\:\\:\\forall n\\ge N_{\\varepsilon,x}\\implies \\left|m\\ln x-\\varphi (x)\\right|\\le \\varepsilon\\:|m\\ln x|.$$</span> <span class=\"math-container\">$$$$</span></p>\n<p>So that <span class=\"math-container\">$\\:\\varphi(x)\\underset{\\:0^+}{\\sim}\\:m\\ln x\\:$</span>, since <span class=\"math-container\">$\\:\\lim_{x\\to 0^+}\\Large(\\normalsize\\varphi(x)/(m\\ln x)-1\\Large)\\normalsize\\:=0.$</span> <span class=\"math-container\">$$$$</span></p>\n<p>By now it's a piece of cake to prove that</p>\n<p><span class=\"math-container\">$$ \\forall (m,n)\\in \\mathbf N^2\\:,\\:\\: e^{\\varphi(x)}\\underset {\\:0^+}\\sim\\:e^{\\ln x^m}=\\:x^m\\overset{\\:x\\to 0^+}\\longrightarrow \\:0.$$</span></p>\n" }, { "answer_id": 2080993, "score": 1, "text": "<p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p>\n\n<p><strong>Claim:</strong></p>\n\n<p>$$\\lim\\limits_{x\\to 0^+}x^m (\\ln x)^n = 0, \\,\\text{for} \\quad m,n \\in \\mathbb N$$</p>\n\n<p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\\alpha\\ln x \\to0$ for all (rational) $\\alpha&gt;0$, then we have our result.</p>\n\n<p><em>Observation:</em> $$\\lim_{x\\to0^+}x^\\alpha\\ln x=\\lim_{1/x\\to0}(1/x)^\\alpha\\ln(1/x)=\\lim_{x\\to\\infty}\\left[-\\frac{\\ln x}{x^\\alpha}\\right]$$</p>\n\n<p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p>\n\n<p><strong>Lemma:</strong></p>\n\n<p>$$\\ln x \\le x-1 &lt; x \\text{ for } x&gt;0$$</p>\n\n<p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p>\n\n<p><em>Corollary:</em></p>\n\n<p>$$\\ln\\left(x^{1/m}\\right)&lt;x^{1/m}\\implies \\ln x &lt; mx^{1/m} \\text{ for } m&gt;0$$</p>\n\n<p><strong>Bringing it all together:</strong></p>\n\n<p>From this, we can see that $\\frac{\\ln x}{x^\\alpha}&lt;mx^{1/m-\\alpha}$ for any positive $m$.</p>\n\n<p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. 
</p>\n\n<p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\\frac{\\ln x}{x^\\alpha}\\to 0$ as $x\\to\\infty$.</p>\n\n<p>Retracing our steps, we can see that our claimed inequality must be true.</p>\n" }, { "answer_id": 2081081, "score": 1, "text": "<p>We want to show $x^m(\\ln x)^n \\to 0,$ which is the same as showing $x^m|\\ln x|^n \\to 0.$ Apply $\\ln $ to see this is the same as showing</p>\n\n<p>$$\\tag 1 m\\ln x + n \\ln (|\\ln x|) \\to -\\infty.$$</p>\n\n<p>Now for $u\\ge 4,$ $\\ln u \\le u^{1/2}.^*$ And for small $x&gt;0,$ $|\\ln x| \\ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p>\n\n<p>$$m\\ln x + n |\\ln x|^{1/2} = -m|\\ln x| + n |\\ln x|^{1/2} = |\\ln x|^{1/2}(-m|\\ln x|^{1/2} + n).$$ </p>\n\n<p>As $x\\to 0^+,$ the last expression is looking like $\\infty\\cdot (-\\infty)=-\\infty.$ This proves $(1)$ and we're done.</p>\n\n<p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p>\n" }, { "answer_id": 2080989, "score": 6, "text": "<p>Changing variable $y=-\\ln x$ the limit becomes\n$$(-1)^n\\lim_{y\\to+\\infty}e^{-my}y^n.$$\nSince for positive $z$\n$$e^z=\\sum_{k=0}^\\infty \\frac{z^k}{k!}\\geq \\frac{z^{n+1}}{(n+1)!}$$\nputting $z=my$ we obtain\n$$ e^{-my}y^n=\\frac{y^n}{e^{my}}\\leq\\frac{y^n}{\\frac{1}{(n+1)!}(my)^{n+1}}=\\frac{(n+1)!}{m^{n+1}}\\frac1y$$\nand this goes to zero as $y\\to+\\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p>\n" }, { "answer_id": 2080969, "score": 2, "text": "<p>The issue here is that <span class=\"math-container\">$\\lim_{x\\rightarrow 0} x = 0$</span>, but isn't it possible that <span class=\"math-container\">$\\ln(x)$</span> approaches <span class=\"math-container\">$-\\infty$</span> <em>faster</em> than <span class=\"math-container\">$x$</span> goes to <span class=\"math-container\">$0$</span>? For instance, consider <span class=\"math-container\">$e^x/x$</span>. Certainly this function approaches <span class=\"math-container\">$\\infty$</span> as <span class=\"math-container\">$x$</span> goes to infinity, but <span class=\"math-container\">$1/x$</span> approaches <span class=\"math-container\">$0$</span> as <span class=\"math-container\">$x$</span> goes to infinity. So <span class=\"math-container\">$\\lim_{x\\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class=\"math-container\">$0$</span>.</p>\n<p>The real question to consider: what tools do you have for solving limits? Do you have the <span class=\"math-container\">$\\epsilon$</span>-<span class=\"math-container\">$\\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p>\n" }, { "answer_id": 2080997, "score": 2, "text": "<p>Let $x=e^{-t}$. 
Then</p>\n\n<p>$$L=\\lim_{t\\to\\infty}{e^{-mt}t^n}=\\left(\\lim_{t\\to\\infty}{e^{-mt/n}t}\\right)^n=\\left(\\frac nm\\lim_{t\\to\\infty}{e^{-t}t}\\right)^n.$$</p>\n\n<p>Now by induction,</p>\n\n<p>$$t\\ge3\\implies e^{-t-1}(t+1)=\\frac{t+1}{et}e^{-t}t&lt;\\frac12e^{-t}t$$ and the limit is $0$.</p>\n" }, { "answer_id": 2496092, "score": 2, "text": "<p>Write <span class=\"math-container\">$$x^m\\ln^nx=e^{m\\ln x+\\ln\\ln^n x}:=e^{\\varphi(x)},\\quad \\:\\varphi(x)=m\\ln x\\left(1+{\\ln \\ln^n x\\over m\\ln x}\\right).$$</span>\nWe have that</p>\n<p><span class=\"math-container\">$$\\forall x&gt;0\\:\\:\\:\\forall \\varepsilon&gt;0\\:\\:\\exists N_{\\varepsilon,x}&gt;0\\:\\:\\forall n\\ge N_{\\varepsilon,x}\\implies \\left|m\\ln x-\\varphi (x)\\right|\\le \\varepsilon\\:|m\\ln x|.$$</span> <span class=\"math-container\">$$$$</span></p>\n<p>So that <span class=\"math-container\">$\\:\\varphi(x)\\underset{\\:0^+}{\\sim}\\:m\\ln x\\:$</span>, since <span class=\"math-container\">$\\:\\lim_{x\\to 0^+}\\Large(\\normalsize\\varphi(x)/(m\\ln x)-1\\Large)\\normalsize\\:=0.$</span> <span class=\"math-container\">$$$$</span></p>\n<p>By now it's a piece of cake to prove that</p>\n<p><span class=\"math-container\">$$ \\forall (m,n)\\in \\mathbf N^2\\:,\\:\\: e^{\\varphi(x)}\\underset {\\:0^+}\\sim\\:e^{\\ln x^m}=\\:x^m\\overset{\\:x\\to 0^+}\\longrightarrow \\:0.$$</span></p>\n" }, { "answer_id": 2080993, "score": 1, "text": "<p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p>\n\n<p><strong>Claim:</strong></p>\n\n<p>$$\\lim\\limits_{x\\to 0^+}x^m (\\ln x)^n = 0, \\,\\text{for} \\quad m,n \\in \\mathbb N$$</p>\n\n<p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\\alpha\\ln x \\to0$ for all (rational) $\\alpha&gt;0$, then we have our result.</p>\n\n<p><em>Observation:</em> $$\\lim_{x\\to0^+}x^\\alpha\\ln x=\\lim_{1/x\\to0}(1/x)^\\alpha\\ln(1/x)=\\lim_{x\\to\\infty}\\left[-\\frac{\\ln x}{x^\\alpha}\\right]$$</p>\n\n<p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p>\n\n<p><strong>Lemma:</strong></p>\n\n<p>$$\\ln x \\le x-1 &lt; x \\text{ for } x&gt;0$$</p>\n\n<p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p>\n\n<p><em>Corollary:</em></p>\n\n<p>$$\\ln\\left(x^{1/m}\\right)&lt;x^{1/m}\\implies \\ln x &lt; mx^{1/m} \\text{ for } m&gt;0$$</p>\n\n<p><strong>Bringing it all together:</strong></p>\n\n<p>From this, we can see that $\\frac{\\ln x}{x^\\alpha}&lt;mx^{1/m-\\alpha}$ for any positive $m$.</p>\n\n<p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. 
</p>\n\n<p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\\frac{\\ln x}{x^\\alpha}\\to 0$ as $x\\to\\infty$.</p>\n\n<p>Retracing our steps, we can see that our claimed inequality must be true.</p>\n" }, { "answer_id": 2081081, "score": 1, "text": "<p>We want to show $x^m(\\ln x)^n \\to 0,$ which is the same as showing $x^m|\\ln x|^n \\to 0.$ Apply $\\ln $ to see this is the same as showing</p>\n\n<p>$$\\tag 1 m\\ln x + n \\ln (|\\ln x|) \\to -\\infty.$$</p>\n\n<p>Now for $u\\ge 4,$ $\\ln u \\le u^{1/2}.^*$ And for small $x&gt;0,$ $|\\ln x| \\ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p>\n\n<p>$$m\\ln x + n |\\ln x|^{1/2} = -m|\\ln x| + n |\\ln x|^{1/2} = |\\ln x|^{1/2}(-m|\\ln x|^{1/2} + n).$$ </p>\n\n<p>As $x\\to 0^+,$ the last expression is looking like $\\infty\\cdot (-\\infty)=-\\infty.$ This proves $(1)$ and we're done.</p>\n\n<p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p>\n" } ]
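As a side check of the factorial bound used in the first answer above ($e^{-my}y^n \le \frac{(n+1)!}{m^{n+1}}\cdot\frac1y$, which follows from $e^z \ge z^{n+1}/(n+1)!$), here is a small sketch; `lhs`/`rhs` are my names for the two sides, and the sample points are arbitrary:

```python
import math

def lhs(y: float, m: int, n: int) -> float:
    """Left side e^(-m y) * y^n of the inequality."""
    return math.exp(-m * y) * y ** n

def rhs(y: float, m: int, n: int) -> float:
    """Right side (n+1)! / m^(n+1) * 1/y."""
    return math.factorial(n + 1) / m ** (n + 1) / y

m, n = 2, 5
assert all(lhs(y, m, n) <= rhs(y, m, n) for y in (0.5, 1, 2, 5, 10, 50))
print("bound holds on the sample; rhs -> 0 as y -> infinity forces lhs -> 0")
```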
Question: How to prove $\lim\limits_{x\to 0}x^m (\ln x)^n = 0$ Answer 1: <p>Changing variable $y=-\ln x$ the limit becomes $$(-1)^n\lim_{y\to+\infty}e^{-my}y^n.$$ Since for positive $z$ $$e^z=\sum_{k=0}^\infty \frac{z^k}{k!}\geq \frac{z^{n+1}}{(n+1)!}$$ putting $z=my$ we obtain $$ e^{-my}y^n=\frac{y^n}{e^{my}}\leq\frac{y^n}{\frac{1}{(n+1)!}(my)^{n+1}}=\frac{(n+1)!}{m^{n+1}}\frac1y$$ and this goes to zero as $y\to+\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p> Answer 2: <p>The issue here is that <span class="math-container">$\lim_{x\rightarrow 0} x = 0$</span>, but isn't it possible that <span class="math-container">$\ln(x)$</span> approaches <span class="math-container">$-\infty$</span> <em>faster</em> than <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span>? For instance, consider <span class="math-container">$e^x/x$</span>. Certainly this function approaches <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> goes to infinity, but <span class="math-container">$1/x$</span> approaches <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes to infinity. So <span class="math-container">$\lim_{x\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class="math-container">$0$</span>.</p> <p>The real question to consider: what tools do you have for solving limits? Do you have the <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p> Answer 3: <p>Let $x=e^{-t}$. Then</p> <p>$$L=\lim_{t\to\infty}{e^{-mt}t^n}=\left(\lim_{t\to\infty}{e^{-mt/n}t}\right)^n=\left(\frac nm\lim_{t\to\infty}{e^{-t}t}\right)^n.$$</p> <p>Now by induction,</p> <p>$$t\ge3\implies e^{-t-1}(t+1)=\frac{t+1}{et}e^{-t}t&lt;\frac12e^{-t}t$$ and the limit is $0$.</p> Answer 4: <p>Write <span class="math-container">$$x^m\ln^nx=e^{m\ln x+\ln\ln^n x}:=e^{\varphi(x)},\quad \:\varphi(x)=m\ln x\left(1+{\ln \ln^n x\over m\ln x}\right).$$</span> We have that</p> <p><span class="math-container">$$\forall x&gt;0\:\:\:\forall \varepsilon&gt;0\:\:\exists N_{\varepsilon,x}&gt;0\:\:\forall n\ge N_{\varepsilon,x}\implies \left|m\ln x-\varphi (x)\right|\le \varepsilon\:|m\ln x|.$$</span> <span class="math-container">$$$$</span></p> <p>So that <span class="math-container">$\:\varphi(x)\underset{\:0^+}{\sim}\:m\ln x\:$</span>, since <span class="math-container">$\:\lim_{x\to 0^+}\Large(\normalsize\varphi(x)/(m\ln x)-1\Large)\normalsize\:=0.$</span> <span class="math-container">$$$$</span></p> <p>By now it's a piece of cake to prove that</p> <p><span class="math-container">$$ \forall (m,n)\in \mathbf N^2\:,\:\: e^{\varphi(x)}\underset {\:0^+}\sim\:e^{\ln x^m}=\:x^m\overset{\:x\to 0^+}\longrightarrow \:0.$$</span></p> Answer 5: <p>The ideal solution here is highly dependent on the mathematical toolbox available to you. 
I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p> <p><strong>Claim:</strong></p> <p>$$\lim\limits_{x\to 0^+}x^m (\ln x)^n = 0, \,\text{for} \quad m,n \in \mathbb N$$</p> <p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\alpha\ln x \to0$ for all (rational) $\alpha&gt;0$, then we have our result.</p> <p><em>Observation:</em> $$\lim_{x\to0^+}x^\alpha\ln x=\lim_{1/x\to0}(1/x)^\alpha\ln(1/x)=\lim_{x\to\infty}\left[-\frac{\ln x}{x^\alpha}\right]$$</p> <p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p> <p><strong>Lemma:</strong></p> <p>$$\ln x \le x-1 &lt; x \text{ for } x&gt;0$$</p> <p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p> <p><em>Corollary:</em></p> <p>$$\ln\left(x^{1/m}\right)&lt;x^{1/m}\implies \ln x &lt; mx^{1/m} \text{ for } m&gt;0$$</p> <p><strong>Bringing it all together:</strong></p> <p>From this, we can see that $\frac{\ln x}{x^\alpha}&lt;mx^{1/m-\alpha}$ for any positive $m$.</p> <p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. </p> <p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\frac{\ln x}{x^\alpha}\to 0$ as $x\to\infty$.</p> <p>Retracing our steps, we can see that our claimed inequality must be true.</p> Answer 6: <p>We want to show $x^m(\ln x)^n \to 0,$ which is the same as showing $x^m|\ln x|^n \to 0.$ Apply $\ln $ to see this is the same as showing</p> <p>$$\tag 1 m\ln x + n \ln (|\ln x|) \to -\infty.$$</p> <p>Now for $u\ge 4,$ $\ln u \le u^{1/2}.^*$ And for small $x&gt;0,$ $|\ln x| \ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p> <p>$$m\ln x + n |\ln x|^{1/2} = -m|\ln x| + n |\ln x|^{1/2} = |\ln x|^{1/2}(-m|\ln x|^{1/2} + n).$$ </p> <p>As $x\to 0^+,$ the last expression is looking like $\infty\cdot (-\infty)=-\infty.$ This proves $(1)$ and we're done.</p> <p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p> Answer 7: <p>Changing variable $y=-\ln x$ the limit becomes $$(-1)^n\lim_{y\to+\infty}e^{-my}y^n.$$ Since for positive $z$ $$e^z=\sum_{k=0}^\infty \frac{z^k}{k!}\geq \frac{z^{n+1}}{(n+1)!}$$ putting $z=my$ we obtain $$ e^{-my}y^n=\frac{y^n}{e^{my}}\leq\frac{y^n}{\frac{1}{(n+1)!}(my)^{n+1}}=\frac{(n+1)!}{m^{n+1}}\frac1y$$ and this goes to zero as $y\to+\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p> Answer 8: <p>The issue here is that <span class="math-container">$\lim_{x\rightarrow 0} x = 0$</span>, but isn't it possible that <span class="math-container">$\ln(x)$</span> approaches <span class="math-container">$-\infty$</span> <em>faster</em> than <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span>? For instance, consider <span class="math-container">$e^x/x$</span>. 
Certainly this function approaches <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> goes to infinity, but <span class="math-container">$1/x$</span> approaches <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes to infinity. So <span class="math-container">$\lim_{x\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class="math-container">$0$</span>.</p> <p>The real question to consider: what tools do you have for solving limits? Do you have the <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p> Answer 9: <p>Let $x=e^{-t}$. Then</p> <p>$$L=\lim_{t\to\infty}{e^{-mt}t^n}=\left(\lim_{t\to\infty}{e^{-mt/n}t}\right)^n=\left(\frac nm\lim_{t\to\infty}{e^{-t}t}\right)^n.$$</p> <p>Now by induction,</p> <p>$$t\ge3\implies e^{-t-1}(t+1)=\frac{t+1}{et}e^{-t}t&lt;\frac12e^{-t}t$$ and the limit is $0$.</p> Answer 10: <p>Write <span class="math-container">$$x^m\ln^nx=e^{m\ln x+\ln\ln^n x}:=e^{\varphi(x)},\quad \:\varphi(x)=m\ln x\left(1+{\ln \ln^n x\over m\ln x}\right).$$</span> We have that</p> <p><span class="math-container">$$\forall x&gt;0\:\:\:\forall \varepsilon&gt;0\:\:\exists N_{\varepsilon,x}&gt;0\:\:\forall n\ge N_{\varepsilon,x}\implies \left|m\ln x-\varphi (x)\right|\le \varepsilon\:|m\ln x|.$$</span> <span class="math-container">$$$$</span></p> <p>So that <span class="math-container">$\:\varphi(x)\underset{\:0^+}{\sim}\:m\ln x\:$</span>, since <span class="math-container">$\:\lim_{x\to 0^+}\Large(\normalsize\varphi(x)/(m\ln x)-1\Large)\normalsize\:=0.$</span> <span class="math-container">$$$$</span></p> <p>By now it's a piece of cake to prove that</p> <p><span class="math-container">$$ \forall (m,n)\in \mathbf N^2\:,\:\: e^{\varphi(x)}\underset {\:0^+}\sim\:e^{\ln x^m}=\:x^m\overset{\:x\to 0^+}\longrightarrow \:0.$$</span></p> Answer 11: <p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p> <p><strong>Claim:</strong></p> <p>$$\lim\limits_{x\to 0^+}x^m (\ln x)^n = 0, \,\text{for} \quad m,n \in \mathbb N$$</p> <p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\alpha\ln x \to0$ for all (rational) $\alpha&gt;0$, then we have our result.</p> <p><em>Observation:</em> $$\lim_{x\to0^+}x^\alpha\ln x=\lim_{1/x\to0}(1/x)^\alpha\ln(1/x)=\lim_{x\to\infty}\left[-\frac{\ln x}{x^\alpha}\right]$$</p> <p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p> <p><strong>Lemma:</strong></p> <p>$$\ln x \le x-1 &lt; x \text{ for } x&gt;0$$</p> <p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. 
Still, this is a fairly uncontroversial inequality, so we use it to establish:</p> <p><em>Corollary:</em></p> <p>$$\ln\left(x^{1/m}\right)&lt;x^{1/m}\implies \ln x &lt; mx^{1/m} \text{ for } m&gt;0$$</p> <p><strong>Bringing it all together:</strong></p> <p>From this, we can see that $\frac{\ln x}{x^\alpha}&lt;mx^{1/m-\alpha}$ for any positive $m$.</p> <p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. </p> <p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\frac{\ln x}{x^\alpha}\to 0$ as $x\to\infty$.</p> <p>Retracing our steps, we can see that our claimed inequality must be true.</p> Answer 12: <p>We want to show $x^m(\ln x)^n \to 0,$ which is the same as showing $x^m|\ln x|^n \to 0.$ Apply $\ln $ to see this is the same as showing</p> <p>$$\tag 1 m\ln x + n \ln (|\ln x|) \to -\infty.$$</p> <p>Now for $u\ge 4,$ $\ln u \le u^{1/2}.^*$ And for small $x&gt;0,$ $|\ln x| \ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p> <p>$$m\ln x + n |\ln x|^{1/2} = -m|\ln x| + n |\ln x|^{1/2} = |\ln x|^{1/2}(-m|\ln x|^{1/2} + n).$$ </p> <p>As $x\to 0^+,$ the last expression is looking like $\infty\cdot (-\infty)=-\infty.$ This proves $(1)$ and we're done.</p> <p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p> Answer 13: <p>Changing variable $y=-\ln x$ the limit becomes $$(-1)^n\lim_{y\to+\infty}e^{-my}y^n.$$ Since for positive $z$ $$e^z=\sum_{k=0}^\infty \frac{z^k}{k!}\geq \frac{z^{n+1}}{(n+1)!}$$ putting $z=my$ we obtain $$ e^{-my}y^n=\frac{y^n}{e^{my}}\leq\frac{y^n}{\frac{1}{(n+1)!}(my)^{n+1}}=\frac{(n+1)!}{m^{n+1}}\frac1y$$ and this goes to zero as $y\to+\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p> Answer 14: <p>The issue here is that <span class="math-container">$\lim_{x\rightarrow 0} x = 0$</span>, but isn't it possible that <span class="math-container">$\ln(x)$</span> approaches <span class="math-container">$-\infty$</span> <em>faster</em> than <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span>? For instance, consider <span class="math-container">$e^x/x$</span>. Certainly this function approaches <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> goes to infinity, but <span class="math-container">$1/x$</span> approaches <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes to infinity. So <span class="math-container">$\lim_{x\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class="math-container">$0$</span>.</p> <p>The real question to consider: what tools do you have for solving limits? Do you have the <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p> Answer 15: <p>Let $x=e^{-t}$. 
Then</p> <p>$$L=\lim_{t\to\infty}{e^{-mt}t^n}=\left(\lim_{t\to\infty}{e^{-mt/n}t}\right)^n=\left(\frac nm\lim_{t\to\infty}{e^{-t}t}\right)^n.$$</p> <p>Now by induction,</p> <p>$$t\ge3\implies e^{-t-1}(t+1)=\frac{t+1}{et}e^{-t}t&lt;\frac12e^{-t}t$$ and the limit is $0$.</p> Answer 16: <p>Write <span class="math-container">$$x^m\ln^nx=e^{m\ln x+\ln\ln^n x}:=e^{\varphi(x)},\quad \:\varphi(x)=m\ln x\left(1+{\ln \ln^n x\over m\ln x}\right).$$</span> We have that</p> <p><span class="math-container">$$\forall x&gt;0\:\:\:\forall \varepsilon&gt;0\:\:\exists N_{\varepsilon,x}&gt;0\:\:\forall n\ge N_{\varepsilon,x}\implies \left|m\ln x-\varphi (x)\right|\le \varepsilon\:|m\ln x|.$$</span> <span class="math-container">$$$$</span></p> <p>So that <span class="math-container">$\:\varphi(x)\underset{\:0^+}{\sim}\:m\ln x\:$</span>, since <span class="math-container">$\:\lim_{x\to 0^+}\Large(\normalsize\varphi(x)/(m\ln x)-1\Large)\normalsize\:=0.$</span> <span class="math-container">$$$$</span></p> <p>By now it's a piece of cake to prove that</p> <p><span class="math-container">$$ \forall (m,n)\in \mathbf N^2\:,\:\: e^{\varphi(x)}\underset {\:0^+}\sim\:e^{\ln x^m}=\:x^m\overset{\:x\to 0^+}\longrightarrow \:0.$$</span></p> Answer 17: <p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p> <p><strong>Claim:</strong></p> <p>$$\lim\limits_{x\to 0^+}x^m (\ln x)^n = 0, \,\text{for} \quad m,n \in \mathbb N$$</p> <p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\alpha\ln x \to0$ for all (rational) $\alpha&gt;0$, then we have our result.</p> <p><em>Observation:</em> $$\lim_{x\to0^+}x^\alpha\ln x=\lim_{1/x\to0}(1/x)^\alpha\ln(1/x)=\lim_{x\to\infty}\left[-\frac{\ln x}{x^\alpha}\right]$$</p> <p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p> <p><strong>Lemma:</strong></p> <p>$$\ln x \le x-1 &lt; x \text{ for } x&gt;0$$</p> <p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p> <p><em>Corollary:</em></p> <p>$$\ln\left(x^{1/m}\right)&lt;x^{1/m}\implies \ln x &lt; mx^{1/m} \text{ for } m&gt;0$$</p> <p><strong>Bringing it all together:</strong></p> <p>From this, we can see that $\frac{\ln x}{x^\alpha}&lt;mx^{1/m-\alpha}$ for any positive $m$.</p> <p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. 
</p> <p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\frac{\ln x}{x^\alpha}\to 0$ as $x\to\infty$.</p> <p>Retracing our steps, we can see that our claimed inequality must be true.</p> Answer 18: <p>We want to show $x^m(\ln x)^n \to 0,$ which is the same as showing $x^m|\ln x|^n \to 0.$ Apply $\ln $ to see this is the same as showing</p> <p>$$\tag 1 m\ln x + n \ln (|\ln x|) \to -\infty.$$</p> <p>Now for $u\ge 4,$ $\ln u \le u^{1/2}.^*$ And for small $x&gt;0,$ $|\ln x| \ge 4.$ For such $x$ the left side of $(1)$ is bounded above by</p> <p>$$m\ln x + n |\ln x|^{1/2} = -m|\ln x| + n |\ln x|^{1/2} = |\ln x|^{1/2}(-m|\ln x|^{1/2} + n).$$ </p> <p>As $x\to 0^+,$ the last expression is looking like $\infty\cdot (-\infty)=-\infty.$ This proves $(1)$ and we're done.</p> <p>$^*$To prove this, note that it's true at $4,$ and then compare derivatives.</p> Answer 19: <p>Changing variable $y=-\ln x$ the limit becomes $$(-1)^n\lim_{y\to+\infty}e^{-my}y^n.$$ Since for positive $z$ $$e^z=\sum_{k=0}^\infty \frac{z^k}{k!}\geq \frac{z^{n+1}}{(n+1)!}$$ putting $z=my$ we obtain $$ e^{-my}y^n=\frac{y^n}{e^{my}}\leq\frac{y^n}{\frac{1}{(n+1)!}(my)^{n+1}}=\frac{(n+1)!}{m^{n+1}}\frac1y$$ and this goes to zero as $y\to+\infty$. Since $e^{-my}y^n$ is positive, by the squeeze theorem the original limit is zero.</p> Answer 20: <p>The issue here is that <span class="math-container">$\lim_{x\rightarrow 0} x = 0$</span>, but isn't it possible that <span class="math-container">$\ln(x)$</span> approaches <span class="math-container">$-\infty$</span> <em>faster</em> than <span class="math-container">$x$</span> goes to <span class="math-container">$0$</span>? For instance, consider <span class="math-container">$e^x/x$</span>. Certainly this function approaches <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> goes to infinity, but <span class="math-container">$1/x$</span> approaches <span class="math-container">$0$</span> as <span class="math-container">$x$</span> goes to infinity. So <span class="math-container">$\lim_{x\rightarrow 0}=0$</span> is not a sufficient reason for the larger limit to be <span class="math-container">$0$</span>.</p> <p>The real question to consider: what tools do you have for solving limits? Do you have the <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition? L'Hopital's Rule (as asked and answered above)? Basically, what are the rules of the game for this question? (I ask, because there are probably multiple ways to solve that problem.)</p> Answer 21: <p>Let $x=e^{-t}$. 
Then</p> <p>$$L=\lim_{t\to\infty}{e^{-mt}t^n}=\left(\lim_{t\to\infty}{e^{-mt/n}t}\right)^n=\left(\frac nm\lim_{t\to\infty}{e^{-t}t}\right)^n.$$</p> <p>Now by induction,</p> <p>$$t\ge3\implies e^{-t-1}(t+1)=\frac{t+1}{et}e^{-t}t&lt;\frac12e^{-t}t$$ and the limit is $0$.</p> Answer 22: <p>Write <span class="math-container">$$x^m\ln^nx=e^{m\ln x+\ln\ln^n x}:=e^{\varphi(x)},\quad \:\varphi(x)=m\ln x\left(1+{\ln \ln^n x\over m\ln x}\right).$$</span> We have that</p> <p><span class="math-container">$$\forall x&gt;0\:\:\:\forall \varepsilon&gt;0\:\:\exists N_{\varepsilon,x}&gt;0\:\:\forall n\ge N_{\varepsilon,x}\implies \left|m\ln x-\varphi (x)\right|\le \varepsilon\:|m\ln x|.$$</span> <span class="math-container">$$$$</span></p> <p>So that <span class="math-container">$\:\varphi(x)\underset{\:0^+}{\sim}\:m\ln x\:$</span>, since <span class="math-container">$\:\lim_{x\to 0^+}\Large(\normalsize\varphi(x)/(m\ln x)-1\Large)\normalsize\:=0.$</span> <span class="math-container">$$$$</span></p> <p>By now it's a piece of cake to prove that</p> <p><span class="math-container">$$ \forall (m,n)\in \mathbf N^2\:,\:\: e^{\varphi(x)}\underset {\:0^+}\sim\:e^{\ln x^m}=\:x^m\overset{\:x\to 0^+}\longrightarrow \:0.$$</span></p> Answer 23: <p>The ideal solution here is highly dependent on the mathematical toolbox available to you. I present a solution here which is not too technical (easing off on calculus where possible), at the cost of a little rigor:</p> <p><strong>Claim:</strong></p> <p>$$\lim\limits_{x\to 0^+}x^m (\ln x)^n = 0, \,\text{for} \quad m,n \in \mathbb N$$</p> <p><em>Remark:</em> Taking $n^{th}$ roots, we note that, if we can show $x^\alpha\ln x \to0$ for all (rational) $\alpha&gt;0$, then we have our result.</p> <p><em>Observation:</em> $$\lim_{x\to0^+}x^\alpha\ln x=\lim_{1/x\to0}(1/x)^\alpha\ln(1/x)=\lim_{x\to\infty}\left[-\frac{\ln x}{x^\alpha}\right]$$</p> <p>Now, admittedly this is not totally trivial - it relies on the continuity of the functions as hand, as well as being able to convince yourself that we should be allowed to shift limits at $0$ to limits at infinity this way. However, once you can come to terms with this being true (perhaps luckily, it is), we're not too far from a solution.</p> <p><strong>Lemma:</strong></p> <p>$$\ln x \le x-1 &lt; x \text{ for } x&gt;0$$</p> <p>There are many proofs of this, though in honesty, most use calculus or differentiation at some point or another. Still, this is a fairly uncontroversial inequality, so we use it to establish:</p> <p><em>Corollary:</em></p> <p>$$\ln\left(x^{1/m}\right)&lt;x^{1/m}\implies \ln x &lt; mx^{1/m} \text{ for } m&gt;0$$</p> <p><strong>Bringing it all together:</strong></p> <p>From this, we can see that $\frac{\ln x}{x^\alpha}&lt;mx^{1/m-\alpha}$ for any positive $m$.</p> <p>Thus for sufficiently large $m$, this ratio is bounded above by a multiple of $x^{-r}$ for some positive $r$, which tends to 0. 
</p> <p>Given that the ratio is nonnegative for $x&gt;1$, we see that $\frac{\ln x}{x^\alpha}\to 0$ as $x\to\infty$.</p> <p>Retracing our steps, we can see that our claimed inequality must be true.</p>
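As a quick numeric sanity check of the limit the answers above establish (an editor's sketch, not taken from any answer; the exponents m and n below are arbitrary illustrative choices), one can tabulate $x^m(\ln x)^n$ for shrinking $x$:

```cpp
#include <cmath>
#include <cstdio>

// Numeric illustration (not a proof): x^m * (ln x)^n -> 0 as x -> 0+.
// m = 1, n = 3 are arbitrary choices for the demonstration.
int main() {
    const int m = 1, n = 3;
    for (int k = 1; k <= 13; k += 3) {
        double x = std::pow(10.0, -k);
        double v = std::pow(x, m) * std::pow(std::log(x), n);
        std::printf("x = 1e-%02d   x^m (ln x)^n = %+.3e\n", k, v);
    }
    return 0;
}
```

The printed values shrink in magnitude toward zero, matching the proofs; of course this only illustrates the limit, it does not replace the arguments above.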
Prove $p\mid\frac{x^{a}-1}{x-1}$ using Fermat's little theorem
2,080,961
4
https://math.stackexchange.com/questions/2080961/prove-p-mid-fracxa-1x-1-using-fermats-little-theorem
<p>Fermat's little theorem states that if $p$ is a prime number, then for any integer $a$, the number $a^{p} - a$ is an integer multiple of $p$. In the notation of modular arithmetic, this is expressed as $a^{p} \equiv a \pmod p.$</p> <p>Using this theorem, prove:</p> <p>Given a prime number $p$, show that if there exist a positive integer $x$ and a prime number $a$ such that $p$ divides $\frac{x^{a}-1}{x-1}$, then either $a = p$ or $p \equiv 1 \pmod a$.</p> <p>$$p\mid\frac{x^{a}-1}{x-1}$$</p> <p>So, I'm thinking: $$\frac{x^{a}-1}{x-1} = x^{a-1}+x^{a-2}+...+1$$ I tried the telescoping technique, but that doesn't work; assuming $a = p$, Fermat's little theorem shows that $x^{p-1}\equiv 1 \pmod p$.</p> <p>So, what else can I do?</p>
[ { "answer_id": 2081006, "score": 2, "text": "<p>You have:</p>\n\n<p>$p|\\frac{x^a-1}{x-1}\\Rightarrow p|(\\frac{x^a-1}{x-1})(x-1) \\Rightarrow p|x^a-1 \\Rightarrow x^a \\equiv 1 (\\mod p)$</p>\n\n<p>Hence if $m$ is the order of $x$ with respect to $p$, then $m|a \\Rightarrow m=1$ or $m=a$, since $a$ is prime.</p>\n\n<p>If $m=1$, then by the definition of the order, we have that\n$x\\equiv 1(\\mod p)$ and so by the hypothesis we get:</p>\n\n<p>$0\\equiv \\frac{x^a-1}{x-1}\\equiv x^{(a-1)}+...+1\\equiv 1^{(a-1)}+...+1\\equiv a(\\mod p)$</p>\n\n<p>Hence in this case $p|a \\Rightarrow p=1\\ \\text{or}\\ p=a$, since $a$ is a prime. The first equality is rejected, because $p&gt;1$ as a prime number.</p>\n\n<p>If $m=a$, then $a|p-1 \\Rightarrow p\\equiv 1 (\\mod a)$, because it is known that for the order $m$ we have $m|φ(p)$. $φ$ is the Euler totient function.</p>\n\n<p>The proof of this property of the order $m$ uses Fermat's little theorem.</p>\n\n<p><strong>Links:</strong></p>\n\n<p>1.<a href=\"http://en.wikipedia.org/wiki/Multiplicative_order\" rel=\"nofollow noreferrer\">Orders</a></p>\n\n<p>2.<a href=\"http://en.wikipedia.org/wiki/Euler&#39;s_totient_function\" rel=\"nofollow noreferrer\">Euler's Totient Function</a></p>\n" }, { "answer_id": 2081014, "score": 1, "text": "<p>If $p$ divides $\\frac{x^a-1}{x-1}$, it must also divide $x^a-1$, so we can conclude $$x^a\\equiv 1\\mod p$$</p>\n\n<p>Let $o$ be the smallest positive integer with $x^o\\equiv 1\\mod p$, also called the order of $x$ modulo $p$</p>\n\n<p>Then $a$ must be a multiple of $o$, but $a$ is prime. So, either we have $o=1$, in which case we have $p|x-1$, in other words $x\\equiv 1\\mod p$, or we must have $a=o$, in which case we can conclude $a|p-1$, in other words $p\\equiv 1\\mod a$</p>\n" }, { "answer_id": 2081994, "score": 1, "text": "<p>Since $p\\mid \\frac{x^a-1}{x-1}$ can be expressed as $\\frac{x^a-1}{x-1} = pk$, rearranging gives $x^a = (x-1)kp + 1$, and in modular form $x^a \\equiv 1 \\pmod p$.</p>\n\n<p>Then from this congruence, the order of $x$ modulo $p$ divides $a$; since $a$ is prime, either $x \\equiv 1 \\pmod p$, which forces $p = a$ as in the other answers, or $Ord_p(x)=a$. In the latter case $a|\\varphi(p) = p-1$, that is $p-1 \\equiv 0 \\pmod a$, so $p \\equiv 1 \\pmod a$ holds.</p>\n" } ]
Question: Prove $p\mid\frac{x^{a}-1}{x-1}$ using Fermat's little theorem Answer 1: <p>You have:</p> <p>$p|\frac{x^a-1}{x-1}\Rightarrow p|(\frac{x^a-1}{x-1})(x-1) \Rightarrow p|x^a-1 \Rightarrow x^a \equiv 1 (\mod p)$</p> <p>Hence if $m$ is the order of $x$ with respect to $p$, then $m|a \Rightarrow m=1$ or $m=a$, since $a$ is prime.</p> <p>If $m=1$, then by the definition of the order, we have that $x\equiv 1(\mod p)$ and so by the hypothesis we get:</p> <p>$0\equiv \frac{x^a-1}{x-1}\equiv x^{(a-1)}+...+1\equiv 1^{(a-1)}+...+1\equiv a(\mod p)$</p> <p>Hence in this case $p|a \Rightarrow p=1\ \text{or}\ p=a$, since $a$ is a prime. The first equality is rejected, because $p&gt;1$ as a prime number.</p> <p>If $m=a$, then $a|p-1 \Rightarrow p\equiv 1 (\mod a)$, because it is known that for the order $m$ we have $m|φ(p)$. $φ$ is the Euler totient function.</p> <p>The proof of this property of the order $m$ uses Fermat's little theorem.</p> <p><strong>Links:</strong></p> <p>1.<a href="http://en.wikipedia.org/wiki/Multiplicative_order" rel="nofollow noreferrer">Orders</a></p> <p>2.<a href="http://en.wikipedia.org/wiki/Euler&#39;s_totient_function" rel="nofollow noreferrer">Euler's Totient Function</a></p> Answer 2: <p>If $p$ divides $\frac{x^a-1}{x-1}$, it must also divide $x^a-1$, so we can conclude $$x^a\equiv 1\mod p$$</p> <p>Let $o$ be the smallest positive integer with $x^o\equiv 1\mod p$, also called the order of $x$ modulo $p$</p> <p>Then $a$ must be a multiple of $o$, but $a$ is prime. So, either we have $o=1$, in which case we have $p|x-1$, in other words $x\equiv 1\mod p$, or we must have $a=o$, in which case we can conclude $a|p-1$, in other words $p\equiv 1\mod a$</p> Answer 3: <p>Since $p\mid \frac{x^a-1}{x-1}$ can be expressed as $\frac{x^a-1}{x-1} = pk$, rearranging gives $x^a = (x-1)kp + 1$, and in modular form $x^a \equiv 1 \pmod p$.</p> <p>Then from this congruence, the order of $x$ modulo $p$ divides $a$; since $a$ is prime, either $x \equiv 1 \pmod p$, which forces $p = a$ as in the other answers, or $Ord_p(x)=a$. In the latter case $a|\varphi(p) = p-1$, that is $p-1 \equiv 0 \pmod a$, so $p \equiv 1 \pmod a$ holds.</p>
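As a small empirical illustration of the statement just proved (an editor's sketch, not part of any answer; the ranges for $x$ and $a$ are arbitrary small choices so that all intermediate values fit in 64-bit integers), one can factor $(x^a-1)/(x-1)$ and confirm that every prime factor $p$ satisfies $p = a$ or $p \equiv 1 \pmod a$:

```cpp
#include <cstdio>

int main() {
    const int primes_a[] = {2, 3, 5, 7};  // small prime exponents a
    for (int a : primes_a) {
        for (unsigned long long x = 2; x <= 20; ++x) {
            // q = (x^a - 1)/(x - 1) = 1 + x + ... + x^(a-1)
            unsigned long long q = 0, pw = 1;
            for (int i = 0; i < a; ++i) { q += pw; pw *= x; }
            // trial-divide q and test each prime factor p
            for (unsigned long long p = 2; p * p <= q; ++p)
                while (q % p == 0) {
                    if (p != (unsigned long long)a && p % a != 1)
                        std::printf("counterexample? x=%llu a=%d p=%llu\n", x, a, p);
                    q /= p;
                }
            if (q > 1 && q != (unsigned long long)a && q % a != 1)
                std::printf("counterexample? x=%llu a=%d p=%llu\n", x, a, q);
        }
    }
    std::printf("check finished (no output above means no counterexample found)\n");
    return 0;
}
```

Running it prints no counterexamples, as the proofs above guarantee.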
Blow-up of the affine cone
2,081,022
4
https://math.stackexchange.com/questions/2081022/blow-up-of-the-affine-cone
<p>I was trying to blow up at the origin the cone <span class="math-container">$X$</span> given by the equation <span class="math-container">$x^2 + y^2 = z^2$</span>. So by definition, the blow-up is given by the solutions of the equations <span class="math-container">$x^2 + y^2 = z^2, xt = zu, xv = yu, yt = zv$</span> where <span class="math-container">$(x,y,z) \in X$</span> and <span class="math-container">$(u,v,t) \in \mathbb P^2$</span>.</p> <p>I take the usual affine charts <span class="math-container">$ u = 1, v = 1, t = 1$</span>.</p> <p>In the first chart with coordinates <span class="math-container">$(x,v,t)$</span>, the equations become <span class="math-container">$z = xt, y = xv$</span> so <span class="math-container">$x^2(1+v^2) = x^2t^2$</span>. So we obtain <span class="math-container">$x=0$</span> with <span class="math-container">$v,t$</span> free, or <span class="math-container">$1 + v^2 = t^2$</span>. Similarly the second chart gives <span class="math-container">$\{(0,u,t) \mid u,t \in \mathbb A^1\} \cup \{ (y,u,t) \in \mathbb A^3 \mid 1 + u^2 = t^2 \}$</span>. Finally if <span class="math-container">$t = 1$</span> we get <span class="math-container">$z=0$</span> or <span class="math-container">$u^2 + v^2 = 1$</span>.</p> <p>Is this correct? If yes, I don't understand how to get a &quot;global picture&quot; of what the blow-up should look like. It should be a cylinder but I can't see why. I also can't compute <span class="math-container">$E$</span>: from the equations I wrote, it seems to me that any <span class="math-container">$(0,0,0,u,v,t)$</span> is a solution of these equations.</p>
[ { "answer_id": 2343444, "score": 4, "text": "<p>If you think of a blow-up as replacing your blown-up point by a copy of $\\mathbb P^2$, the strict transform (even its real picture analog) will not really be a cylinder because of the topology of $\\mathbb P^2$; informally speaking, a line passing through the point you blew up will have both its ends identified, so in topological terms, the real picture is a cylinder in which you took a circle (which would be the intersection of your cylinder with a plane passing through the origin) and you glued points on that circle lying on opposite ends, as in the classical construction of $\\mathbb{RP}^2$. </p>\n\n<p>In the complex picture, since you blew up a cone given by one equation, the strict transform (i.e. the blow-up of your cone $C$ at the origin) is very easy to compute : it's the equation itself! In other words, the strict transform looks like this :\n$$\n\\mathrm{Bl}_0(C) = \\{ ((x,y,z),[p_x,p_y,p_z]) \\in \\mathbb A^3 \\times \\mathbb P^2 \\mid x^2+y^2=z^2 \\text{ and } p_x^2+p_y^2=p_z^2 \\}. \n$$\nSo the blow-up is obtained by replacing the origin by the set of lines through the origin, and because your equation is given by homogeneous polynomials, the strict transform is given exactly by the same equations ; this is because if you think of it as blowing up $\\mathbb P^3$ at the point $[0:0:0:1]$ instead of blowing up $\\mathbb A^3$ at the origin, the definition of the blow-up tells you that the coordinates of the point $([x,y,z,w],[p_x,p_y,p_z])$ have to satisfy equations like\n$$\np_x y = p_y x, \\quad p_x z = p_z x, \\quad (\\cdots)\n$$\nwhich essentially means that $[p_x,p_y,p_z]$ and $(x,y,z)$ have to satisfy the same homogeneous polynomial equations.</p>\n\n<p>Perhaps it is now easier to see that the intersection of $\\mathrm{Bl}_0(C)$ with $0 \\times \\mathbb P^2$ is just the projectivization of your cone, or in other words, a smooth conic in $\\mathbb P^2$. It is not quite enlightening to think of the complex variety $C$ as a cone, but even though the real picture of that equation looks like a cone, its blow-up definitely is not a cylinder. </p>\n\n<p>As for $E$, I assume you are referring to the exceptional divisor as in Hartshorne. It is not complicated to determine: by definition, it is just $0 \\times \\mathbb P^2 \\subseteq \\mathrm{Bl}_0(\\mathbb A^3)$. The smooth conic we obtained earlier was just $\\mathrm{Bl}_0(C) \\cap E$. </p>\n\n<p>Hope that helps,</p>\n" }, { "answer_id": 2081566, "score": 3, "text": "<p>I agree with your computations. Conceptually, what happened is that you found the total transform of $X$ after blowing up the origin inside $\\mathbb{A}^3$. This consists of $X'\\cup E$, where $X'$ is the strict transform and $E$ is the exceptional divisor, in this case, the $\\mathbb{P}^2$ lying above the origin. (You don't say what $E$ is in the question, and you should, but I'm taking a guess here.)</p>\n\n<p>As for the visualization question, the strict transform of your space might not exactly be a cylinder, but it's isomorphic to one in some sense which I don't want to make precise; consider that it might be kind of like a hyperboloid of one sheet, where the exceptional divisor is a plane through the narrowest circle on the hyperboloid. \"Blowing up\" should physically remind you of inflating a balloon at the origin; things separate a bit after you inflate.</p>\n" } ]
Question: Blow-up of the affine cone Answer 1: <p>If you think of a blow-up as replacing your blown-up point by a copy of $\mathbb P^2$, the strict transform (even its real picture analog) will not really be a cylinder because of the topology of $\mathbb P^2$; informally speaking, a line passing through the point you blew up will have both its ends identified, so in topological terms, the real picture is a cylinder in which you took a circle (which would be the intersection of your cylinder with a plane passing through the origin) and you glued points on that circle lying on opposite ends, as in the classical construction of $\mathbb{RP}^2$. </p> <p>In the complex picture, since you blew up a cone given by one equation, the strict transform (i.e. the blow-up of your cone $C$ at the origin) is very easy to compute : it's the equation itself! In other words, the strict transform looks like this : $$ \mathrm{Bl}_0(C) = \{ ((x,y,z),[p_x,p_y,p_z]) \in \mathbb A^3 \times \mathbb P^2 \mid x^2+y^2=z^2 \text{ and } p_x^2+p_y^2=p_z^2 \}. $$ So the blow-up is obtained by replacing the origin by the set of lines through the origin, and because your equation is given by homogeneous polynomials, the strict transform is given exactly by the same equations ; this is because if you think of it as blowing up $\mathbb P^3$ at the point $[0:0:0:1]$ instead of blowing up $\mathbb A^3$ at the origin, the definition of the blow-up tells you that the coordinates of the point $([x,y,z,w],[p_x,p_y,p_z])$ have to satisfy equations like $$ p_x y = p_y x, \quad p_x z = p_z x, \quad (\cdots) $$ which essentially means that $[p_x,p_y,p_z]$ and $(x,y,z)$ have to satisfy the same homogeneous polynomial equations.</p> <p>Perhaps it is now easier to see that the intersection of $\mathrm{Bl}_0(C)$ with $0 \times \mathbb P^2$ is just the projectivization of your cone, or in other words, a smooth conic in $\mathbb P^2$. It is not quite enlightening to think of the complex variety $C$ as a cone, but even though the real picture of that equation looks like a cone, its blow-up definitely is not a cylinder. </p> <p>As for $E$, I assume you are referring to the exceptional divisor as in Hartshorne. It is not complicated to determine: by definition, it is just $0 \times \mathbb P^2 \subseteq \mathrm{Bl}_0(\mathbb A^3)$. The smooth conic we obtained earlier was just $\mathrm{Bl}_0(C) \cap E$. </p> <p>Hope that helps,</p> Answer 2: <p>I agree with your computations. Conceptually, what happened is that you found the total transform of $X$ after blowing up the origin inside $\mathbb{A}^3$. This consists of $X'\cup E$, where $X'$ is the strict transform and $E$ is the exceptional divisor, in this case, the $\mathbb{P}^2$ lying above the origin. (You don't say what $E$ is in the question, and you should, but I'm taking a guess here.)</p> <p>As for the visualization question, the strict transform of your space might not exactly be a cylinder, but it's isomorphic to one in some sense which I don't want to make precise; consider that it might be kind of like a hyperboloid of one sheet, where the exceptional divisor is a plane through the narrowest circle on the hyperboloid. "Blowing up" should physically remind you of inflating a balloon at the origin; things separate a bit after you inflate.</p>
Determining expected number of rounds
2,081,063
4
https://math.stackexchange.com/questions/2081063/determining-expected-number-of-rounds
<p>I'm trying to determine a formulation for the expected number of rounds for a game of chance based on dice.</p> <p>The details of the game are as follows:</p> <ol> <li><p>The game initially consists of $N$ players</p></li> <li><p>Each player has a fair die or access to one.</p></li> <li><p>For each round each player will roll their die once</p></li> <li><p>The mean of the rolls is determined</p></li> <li><p>Any player that rolled a value less than the mean is eliminated from the game</p></li> <li><p>The rounds are repeated until there is only one player left.</p></li> </ol> <p>My question is: how does one determine the expected number of rounds in terms of $N$?</p> <p>My thinking is that for any round the expectation is that half of the players will be eliminated. This leads to a continuous halving of the number of players, which seems to suggest that $\text{E}(\text{Rounds}) \approx \log_2 {N}$, or more precisely:</p> <p>$$ \lim_{N \to \infty} \left( \text{E}(\text{Rounds}) - \log_2 {N} \right) = 0 $$</p> <p>I wasn't able to formulate a definitive relation, so I ran a simple Monte Carlo simulation of the game, for numbers of players ranging from 2 players to 200 players, one million simulations per game size. The results can be found here:</p> <p><a href="https://gist.github.com/anonymous/f0db85f06343070045b78f7494f19565" rel="nofollow noreferrer">https://gist.github.com/anonymous/f0db85f06343070045b78f7494f19565</a></p> <p>The graph for the results and $\log_2 {N}$ is shown in the following as red and blue respectively:</p> <p><a href="https://i.stack.imgur.com/fOnvL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fOnvL.png" alt="http://i.imgur.com/KvJxNJb.png"></a></p> <p>Where I'm having difficulty is explaining the continuous and consistent overestimate of the result curve (via simulation) when compared to the $\log_2{N}$ curve.</p>
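Below is a minimal Monte Carlo sketch of the game just described (an editor's reconstruction, not the code from the linked gist; the seed, player count, and trial count are arbitrary choices):

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> die(1, 6);
    const int N = 50, trials = 100000;
    double total_rounds = 0.0;
    for (int t = 0; t < trials; ++t) {
        int players = N, rounds = 0;
        while (players > 1) {
            std::vector<int> rolls(players);
            int sum = 0;
            for (int &r : rolls) { r = die(rng); sum += r; }
            int survivors = 0;
            for (int r : rolls)
                if (r * players >= sum) ++survivors;  // keep rolls >= mean
            players = survivors;  // the maximum roll always attains the mean, so survivors >= 1
            ++rounds;
        }
        total_rounds += rounds;
    }
    std::printf("N = %d: mean rounds ~ %.4f   (log2 N = %.4f)\n",
                N, total_rounds / trials, std::log2((double)N));
    return 0;
}
```

Comparing each roll against the mean via `r * players >= sum` keeps everything in integer arithmetic and avoids floating-point division.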
[ { "answer_id": 2081216, "score": 3, "text": "<p>Let e(n) be the expected number of rounds, starting with n players.</p>\n\n<p>For a given positive integer n, the exact value of e(n) can be found via a recursive procedure.\n<p>\nI implemented such a procedure in Maple, and the results, for $2 \\le n \\le 8$, are shown below (the fractions are the exact values).</p>\n\n<p>\\begin{align*}\ne(2)\t&amp;= \\dfrac {6} {5} \\approx 1.200000000\\\\\ne(3)\t&amp;= \\dfrac {303} {175}\t \\approx 1.731428571\\\\\ne(4)\t&amp;= \\dfrac {81486} {37625} \\approx 2.165740864\\\\\ne(5)\t&amp;= \\dfrac {654386} {263375} \\approx 2.484616991\\\\\ne(6)\t&amp;= \\dfrac {1123369554} {409548125} \\approx 2.742948839\\\\\ne(7)\t&amp;= \\dfrac {1155681595606} {389948321875} \\approx 2.963678854\\\\\ne(8)\t&amp;= \\dfrac {49218033279086166} {15594311926296875} \\approx 3.156152930\n\\end{align*}</p>\n\n<p>Here's my Maple implementation of e(n) ...\n<p>\n<a href=\"https://i.stack.imgur.com/oF5vp.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oF5vp.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2084203, "score": 1, "text": "<p>I've run a C++ program very close in algorithm to the Maple program by @quasi. That is, it's based on</p>\n\n<p>$$ e(n) = \\frac{1+\\sum_{0 &lt; k &lt; n}p_{k,n}\\cdot e(k)}{1-p_{n,n}}\\enspace,$$</p>\n\n<p>where $p_{k,n}$ is the probability of there being $k$ survivors out of $n$ players.</p>\n\n<p>Here's a snippet of the code, which shows how one works with the C++ bindings of the GMP bignum library.\n<code><pre>\n// presumably counts[j] = number of outcomes with j+1 survivors\n// (hence the (j+1) weighting below)\nstatic void print_expected_survivors(std::vector&lt;mpz_class&gt; const &amp; counts,\n long i, mpz_class outcomes)\n{\n mpq_class exp_surv = 0;\n for (long j = 0; j != i; ++j) {\n exp_surv += counts[j] * (j+1);\n }\n exp_surv /= outcomes;\n mpf_class ef = exp_surv; // exact rational converted to floating point for printing\n std::cout &lt;&lt; \"s(\" &lt;&lt; i &lt;&lt; \") = \" &lt;&lt; ef &lt;&lt; std::endl;\n}\n</pre></code>\nIn the GMP library, <code>mpz</code>, <code>mpq</code>, and <code>mpf</code> stand for multiple precision integer, rational, and floating-point number, respectively. This little function manipulates data of all three types as well as of type <code>long</code>. </p>\n\n<p>Here's a plot of $e(n) - \\log_2(n)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/pJzQA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pJzQA.png\" alt=\"enter image description here\"></a></p>\n\n<p>Not obvious from the data whether there is a horizontal asymptote different from $y=0$.</p>\n" } ]
Question: Determining expected number of rounds Answer 1: <p>Let e(n) be the expected number of rounds, starting with n players.</p> <p>For a given positive integer n, the exact value of e(n) can be found via a recursive procedure. <p> I implemented such a procedure in Maple, and the results, for $2 \le n \le 8$, are shown below (the fractions are the exact values).</p> <p>\begin{align*} e(2) &amp;= \dfrac {6} {5} \approx 1.200000000\\ e(3) &amp;= \dfrac {303} {175} \approx 1.731428571\\ e(4) &amp;= \dfrac {81486} {37625} \approx 2.165740864\\ e(5) &amp;= \dfrac {654386} {263375} \approx 2.484616991\\ e(6) &amp;= \dfrac {1123369554} {409548125} \approx 2.742948839\\ e(7) &amp;= \dfrac {1155681595606} {389948321875} \approx 2.963678854\\ e(8) &amp;= \dfrac {49218033279086166} {15594311926296875} \approx 3.156152930 \end{align*}</p> <p>Here's my Maple implementation of e(n) ... <p> <a href="https://i.stack.imgur.com/oF5vp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oF5vp.png" alt="enter image description here"></a></p> Answer 2: <p>I've run a C++ program very close in algorithm to the Maple program by @quasi. That is, it's based on</p> <p>$$ e(n) = \frac{1+\sum_{0 &lt; k &lt; n}p_{k,n}\cdot e(k)}{1-p_{n,n}}\enspace,$$</p> <p>where $p_{k,n}$ is the probability of there being $k$ survivors out of $n$ players.</p> <p>Here's a snippet of the code, which shows how one works with the C++ bindings of the GMP bignum library. <code><pre> static void print_expected_survivors(std::vector&lt;mpz_class&gt; const &amp; counts, long i, mpz_class outcomes) { mpq_class exp_surv = 0; for (long j = 0; j != i; ++j) { exp_surv += counts[j] * (j+1); } exp_surv /= outcomes; mpf_class ef = exp_surv; std::cout &lt;&lt; "s(" &lt;&lt; i &lt;&lt; ") = " &lt;&lt; ef &lt;&lt; std::endl; } </pre></code> In the GMP library, <code>mpz</code>, <code>mpq</code>, and <code>mpf</code> stand for multiple precision integer, rational, and floating-point number, respectively. This little function manipulates data of all three types as well as of type <code>long</code>. </p> <p>Here's a plot of $e(n) - \log_2(n)$:</p> <p><a href="https://i.stack.imgur.com/pJzQA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pJzQA.png" alt="enter image description here"></a></p> <p>Not obvious from the data whether there is a horizontal asymptote different from $y=0$.</p>
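To make the recursion in the answers above concrete, here is a minimal Python sketch (my own addition, not from the thread) of $e(n) = \frac{1+\sum_{k=1}^{n-1} p_{k,n}\, e(k)}{1-p_{n,n}}$ using exact rational arithmetic. The rules of the game are not stated in this excerpt, so the survival probabilities are passed in as a hypothetical user-supplied function <code>p(k, m)</code>, and $e(1) = 0$ is assumed (the game ends when one player remains).

<pre><code>from fractions import Fraction
from functools import lru_cache

def expected_rounds(n, p):
    # p(k, m) must return the probability, as a Fraction, that a round
    # played by m players leaves exactly k survivors (k = 1 .. m).
    @lru_cache(maxsize=None)
    def e(m):
        if m == 1:
            return Fraction(0)        # assumed: one survivor ends the game
        total = Fraction(1)           # the round we are about to play
        for k in range(1, m):
            total += p(k, m) * e(k)
        return total / (1 - p(m, m))  # rounds where everyone survives repeat
    return e(n)
</code></pre>

With the actual $p_{k,n}$ of the game plugged in, <code>expected_rounds(2, p)</code> should reproduce the exact value $6/5$ from the table above.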
Showing the centralizer of a conjugate is the conjugate of the centralizer
2,081,141
4
https://math.stackexchange.com/questions/2081141/showing-the-centralizer-of-a-conjugate-is-the-conjugate-of-the-centralizer
<p>I'm trying to show that if $Y$ is a subset of a group $G$ and $z \in G$, then $C_{G}(z^{-1}Yz)= z^{-1}C_{G}(Y)z$, where $C_{G}(Y)$ is the <strong>centralizer</strong> of $Y$ (i.e., $C_{G}(Y) = \{x \in G | xy = yx, \forall y \in Y \}$).</p> <p>I understand that since this is a "set equals", I need to show that $C_{G}(z^{-1}Yz)\subset z^{-1}C_{G}(Y) z$ and $z^{-1}C_{G}(Y) z \subset C_{G} (z^{-1}Yz)$.</p> <p>In the $z^{-1}C_{G}(Y)z \subset C_{G}(z^{-1}Yz)$ direction, let $w \in z^{-1} C_{G}(Y) z$. Then, $z^{-1}(wy)z = z^{-1}(yw)z$, $\forall y \in Y$. Then, we want to show that $w(z^{-1}Yz) = (z^{-1}Yz)w$, but I am very confused about how to show this. Multiplying both sides of $z^{-1}(wy)z = z^{-1}(yw)z$ by things isn't getting me anywhere, and I am at a loss as to what to try next. Sometimes these proofs involve clever little tricks I wouldn't have thought of, and so I am asking whether I am missing something here, and if so (and even if not!) how I should proceed.</p> <p>In the $C_{G}(z^{-1}Yz) \subset z^{-1}C_{G}(Y)z$ direction, if we let $u \in C_{G}(z^{-1}Yz)$, then $uz^{-1}Yz =z^{-1}Yzu $, and we want to show that this implies that $u \in z^{-1}C_{G}(Y)z$, or $z^{-1}(uy)z = z^{-1}(yu)z$ for all $y \in Y$. Again, same problem as before.</p> <p>I've seen proofs showing this for the normalizer, but I haven't understood the inclusions in the $N_{G}(z^{-1}Yz) \subset z^{-1}N_{G}(Y)z$ direction. Anyway, even if I could understand them, in order to use that to help me prove this, since the centralizer is contained in the normalizer, I'd have $C_{G}(z^{-1}Yz) \subset N_{G} (z^{-1}Yz)$ and $z^{-1}C_{G}(Y) z \subset z^{-1}N_{G}(Y)z$, but then how would I determine anything about the relationship between $N_{G} (z^{-1}Yz)$ and $z^{-1}C_{G}(Y)z$? (And the relationships in the other direction as well?)</p>
[ { "answer_id": 2081370, "score": 3, "text": "<p>Suppose $w\\in zC_G(Y)z^{-1}$. Therefore $w=zxz^{-1}$ for some $x\\in C_G(Y)$. We want to verify that $w$ is also an element of $C_G(zYz^{-1})$, so let's pick an arbitrary $zyz^{-1}\\in zYz^{-1}$ (where $y\\in Y$) and,</p>\n\n<p>$$ \\begin{array}{ll} w(zyz^{-1}) &amp; =(zxz^{-1})(zyz^{-1}) \\\\ &amp; =z(xy)z^{-1} \\\\ &amp; =z(yx)z^{-1} \\\\ &amp; =(zyz^{-1})(zxz^{-1}) \\\\ &amp; =(zyz^{-1})w. \\end{array} $$</p>\n\n<p>Since $w$ commutes with any element $zyz^{-1}$ of $zYz^{-1}$, we have that all $w\\in zC_G(Y)z^{-1}$ are members of $C_G(zYz^{-1})$, or in other words we have the containment $zC_G(Y)z^{-1}\\subseteq C_G(zYz^{-1})$.</p>\n\n<p>To prove the other other direction, rewrite the containment we just did, replacing $Y$ with $z^{-1}Yz$ and then conjugating by $z$.</p>\n" }, { "answer_id": 2081348, "score": 2, "text": "<p>$$h\\in C_G(Y) \\leftrightarrow [h,Y]=1 \\leftrightarrow z^{-1}[h,Y]z=1 \\leftrightarrow [z^{-1}hz, z^{-1}Yz]=1\\leftrightarrow z^{-1}hz\\in C_G(z^{-1}Yz)$$ </p>\n" }, { "answer_id": 2081156, "score": 0, "text": "<p>$x \\in C(y) \\leftrightarrow x^{-1} y x = y.$\nlet $z = u y u^{-1},$ then $u y u^{-1} \\in C(z).$ The other direction is by definition.</p>\n" }, { "answer_id": 2081370, "score": 3, "text": "<p>Suppose $w\\in zC_G(Y)z^{-1}$. Therefore $w=zxz^{-1}$ for some $x\\in C_G(Y)$. We want to verify that $w$ is also an element of $C_G(zYz^{-1})$, so let's pick an arbitrary $zyz^{-1}\\in zYz^{-1}$ (where $y\\in Y$) and,</p>\n\n<p>$$ \\begin{array}{ll} w(zyz^{-1}) &amp; =(zxz^{-1})(zyz^{-1}) \\\\ &amp; =z(xy)z^{-1} \\\\ &amp; =z(yx)z^{-1} \\\\ &amp; =(zyz^{-1})(zxz^{-1}) \\\\ &amp; =(zyz^{-1})w. \\end{array} $$</p>\n\n<p>Since $w$ commutes with any element $zyz^{-1}$ of $zYz^{-1}$, we have that all $w\\in zC_G(Y)z^{-1}$ are members of $C_G(zYz^{-1})$, or in other words we have the containment $zC_G(Y)z^{-1}\\subseteq C_G(zYz^{-1})$.</p>\n\n<p>To prove the other other direction, rewrite the containment we just did, replacing $Y$ with $z^{-1}Yz$ and then conjugating by $z$.</p>\n" }, { "answer_id": 2081348, "score": 2, "text": "<p>$$h\\in C_G(Y) \\leftrightarrow [h,Y]=1 \\leftrightarrow z^{-1}[h,Y]z=1 \\leftrightarrow [z^{-1}hz, z^{-1}Yz]=1\\leftrightarrow z^{-1}hz\\in C_G(z^{-1}Yz)$$ </p>\n" }, { "answer_id": 2081156, "score": 0, "text": "<p>$x \\in C(y) \\leftrightarrow x^{-1} y x = y.$\nlet $z = u y u^{-1},$ then $u y u^{-1} \\in C(z).$ The other direction is by definition.</p>\n" }, { "answer_id": 2081370, "score": 3, "text": "<p>Suppose $w\\in zC_G(Y)z^{-1}$. Therefore $w=zxz^{-1}$ for some $x\\in C_G(Y)$. We want to verify that $w$ is also an element of $C_G(zYz^{-1})$, so let's pick an arbitrary $zyz^{-1}\\in zYz^{-1}$ (where $y\\in Y$) and,</p>\n\n<p>$$ \\begin{array}{ll} w(zyz^{-1}) &amp; =(zxz^{-1})(zyz^{-1}) \\\\ &amp; =z(xy)z^{-1} \\\\ &amp; =z(yx)z^{-1} \\\\ &amp; =(zyz^{-1})(zxz^{-1}) \\\\ &amp; =(zyz^{-1})w. 
\\end{array} $$</p>\n\n<p>Since $w$ commutes with any element $zyz^{-1}$ of $zYz^{-1}$, we have that all $w\\in zC_G(Y)z^{-1}$ are members of $C_G(zYz^{-1})$, or in other words we have the containment $zC_G(Y)z^{-1}\\subseteq C_G(zYz^{-1})$.</p>\n\n<p>To prove the other other direction, rewrite the containment we just did, replacing $Y$ with $z^{-1}Yz$ and then conjugating by $z$.</p>\n" }, { "answer_id": 2081348, "score": 2, "text": "<p>$$h\\in C_G(Y) \\leftrightarrow [h,Y]=1 \\leftrightarrow z^{-1}[h,Y]z=1 \\leftrightarrow [z^{-1}hz, z^{-1}Yz]=1\\leftrightarrow z^{-1}hz\\in C_G(z^{-1}Yz)$$ </p>\n" }, { "answer_id": 2081156, "score": 0, "text": "<p>$x \\in C(y) \\leftrightarrow x^{-1} y x = y.$\nlet $z = u y u^{-1},$ then $u y u^{-1} \\in C(z).$ The other direction is by definition.</p>\n" }, { "answer_id": 2081370, "score": 3, "text": "<p>Suppose $w\\in zC_G(Y)z^{-1}$. Therefore $w=zxz^{-1}$ for some $x\\in C_G(Y)$. We want to verify that $w$ is also an element of $C_G(zYz^{-1})$, so let's pick an arbitrary $zyz^{-1}\\in zYz^{-1}$ (where $y\\in Y$) and,</p>\n\n<p>$$ \\begin{array}{ll} w(zyz^{-1}) &amp; =(zxz^{-1})(zyz^{-1}) \\\\ &amp; =z(xy)z^{-1} \\\\ &amp; =z(yx)z^{-1} \\\\ &amp; =(zyz^{-1})(zxz^{-1}) \\\\ &amp; =(zyz^{-1})w. \\end{array} $$</p>\n\n<p>Since $w$ commutes with any element $zyz^{-1}$ of $zYz^{-1}$, we have that all $w\\in zC_G(Y)z^{-1}$ are members of $C_G(zYz^{-1})$, or in other words we have the containment $zC_G(Y)z^{-1}\\subseteq C_G(zYz^{-1})$.</p>\n\n<p>To prove the other other direction, rewrite the containment we just did, replacing $Y$ with $z^{-1}Yz$ and then conjugating by $z$.</p>\n" }, { "answer_id": 2081348, "score": 2, "text": "<p>$$h\\in C_G(Y) \\leftrightarrow [h,Y]=1 \\leftrightarrow z^{-1}[h,Y]z=1 \\leftrightarrow [z^{-1}hz, z^{-1}Yz]=1\\leftrightarrow z^{-1}hz\\in C_G(z^{-1}Yz)$$ </p>\n" }, { "answer_id": 2081156, "score": 0, "text": "<p>$x \\in C(y) \\leftrightarrow x^{-1} y x = y.$\nlet $z = u y u^{-1},$ then $u y u^{-1} \\in C(z).$ The other direction is by definition.</p>\n" } ]
Question: Showing the centralizer of a conjugate is the conjugate of the centralizer Answer 1: <p>Suppose $w\in zC_G(Y)z^{-1}$. Therefore $w=zxz^{-1}$ for some $x\in C_G(Y)$. We want to verify that $w$ is also an element of $C_G(zYz^{-1})$, so let's pick an arbitrary $zyz^{-1}\in zYz^{-1}$ (where $y\in Y$) and,</p> <p>$$ \begin{array}{ll} w(zyz^{-1}) &amp; =(zxz^{-1})(zyz^{-1}) \\ &amp; =z(xy)z^{-1} \\ &amp; =z(yx)z^{-1} \\ &amp; =(zyz^{-1})(zxz^{-1}) \\ &amp; =(zyz^{-1})w. \end{array} $$</p> <p>Since $w$ commutes with any element $zyz^{-1}$ of $zYz^{-1}$, we have that all $w\in zC_G(Y)z^{-1}$ are members of $C_G(zYz^{-1})$, or in other words we have the containment $zC_G(Y)z^{-1}\subseteq C_G(zYz^{-1})$.</p> <p>To prove the other direction, rewrite the containment we just did, replacing $Y$ with $z^{-1}Yz$ and then conjugating by $z$.</p> Answer 2: <p>$$h\in C_G(Y) \leftrightarrow [h,Y]=1 \leftrightarrow z^{-1}[h,Y]z=1 \leftrightarrow [z^{-1}hz, z^{-1}Yz]=1\leftrightarrow z^{-1}hz\in C_G(z^{-1}Yz)$$ </p> Answer 3: <p>$x \in C(y) \leftrightarrow x^{-1} y x = y.$ Let $z = u y u^{-1},$ then $u y u^{-1} \in C(z).$ The other direction is by definition.</p>
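As a quick empirical check of this identity (my own addition, not part of the original thread), the following Python sketch brute-forces the claim in the symmetric group $S_3$; the permutation encoding and helper names are hypothetical.

<pre><code>from itertools import permutations

# Elements of S3 as tuples: p[i] is the image of i under p.
G = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def centralizer(Y):
    # brute-force C_G(Y): all x commuting with every y in Y
    return {x for x in G if all(compose(x, y) == compose(y, x) for y in Y)}

def conj(z, S):
    # the set z^{-1} S z, element by element
    zi = inverse(z)
    return {compose(compose(zi, s), z) for s in S}

Y = {(1, 0, 2)}  # a single transposition; any subset of G works here
for z in G:
    assert centralizer(conj(z, Y)) == conj(z, centralizer(Y))
print("C_G(z^{-1}Yz) = z^{-1}C_G(Y)z holds for every z in S3")
</code></pre>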
Definition of a cube
2,081,148
4
https://math.stackexchange.com/questions/2081148/definition-of-a-cube
<p>I was thinking today about what the best definition of a cube is. Google defines it as such:</p> <blockquote> <p>a symmetrical three-dimensional shape, either solid or hollow, contained by six equal squares.</p> </blockquote> <p>I was wondering whether the "squares" part of the definition is necessary:</p> <blockquote> <p>a symmetrical three-dimensional shape, either solid or hollow, contained by six equal sides</p> </blockquote> <p>Would this definition be equivalent?</p> <p>Essentially the question is: can a three-dimensional shape consisting of 6 equal sides be a non-cube? Is this provable?</p>
[ { "answer_id": 2081582, "score": 5, "text": "<p>No, it's not equivalent at all. You can for example fuse two triangular pyramids together to form a triangular bipyramid. This polyhedron has six equal faces which happens to be triangles instead (they could be equilateral). Since triangles are congruent if their side are the same and we can assign the edge lengths quite freely on such figure we don't need the triangles to be equilateral or even isosceles.</p>\n\n<p>Another counter example is a rhombohedron. Here all the edges are the same, but the faces are rhobi. Given the side length of a rhomb it's determined up to congruence by the angle at one corner. In the same way the rhombohedron is determined by its edge length and the angles at one corner and the angles at the oposite corner will match due to parallellity making all faces congruent.</p>\n\n<p>A more elaborate counter example could be to start with a rhombohedron and then \npick two opposite corners and then for each of the other we move them perpendicularily to or from the plane halfway the first corners (perpendicularily to the line connecting them) by equal amount.</p>\n\n<p>Apart from this there are vaugenesses in your proposed definition. The concepts of \"equal\", \"side\" is not as precise as one might think. Normally one talks about congruence in geometry (or \"similar\"). In addition what a \"side\" is is vaugue: do you mean surface or edge? do you require it to be flat or may it be curved? in that case if it may be a curved surface wheres the border of the surface (you could have a sphere and state that it's made up of six congruent surfaces)?</p>\n\n<p>Having that sorted out one could actually prove that there are basically only these two alternatives (the second is a special case of the third, and the cube is a special case of the second) and counter examples. Such a polyhedron would either have to be assembled the same way as a cube consisting of kite (quadrilateral where each edge has an adjacent edge of the same length) surfaces or assembled like a triangular bipyramid. </p>\n\n<p>The proof is by considering a corner and then rule out the possibility that other than three faces meet there. Then one rules out that they be anything else than triangles or quadrilateral. Last one rules out that the quadrilateral can be anything else than kites (by ruling out anything else than pairs of edges with the same length and that they can't be parallellograms unless they are rhomboids and thereby kites).</p>\n\n<p>After realizing the possibilities one could complete your proposal to apply only to cubes. Going along on the same theme it would be to require twelve equal edges and that all angles are the same.</p>\n" }, { "answer_id": 2081521, "score": 0, "text": "<p>If you talk about a “shape”, then I'm not sure the term “face” is well defined in this concept. In particular, there is no guarantee that it will be flat. So I'd say you need to define the shape as a polytope.</p>\n\n<p>The “symmetrical” in the definition you quote is pretty vague, too. You could easily come up with an object which has a mirror reflection but is pretty unsymmetric otherwise.</p>\n\n<p>If you require your polytope to be vertex-transitive and face-transitive, in addition to specifying the number of faces, then I guess you should be fine. One does usually require edge transitivity and convexity, too, to obtain the platonic solid, but in the case of the cube I guess those properties might be derivable from the others. 
Don't have a proof at hand, though.</p>\n" }, { "answer_id": 2081582, "score": 5, "text": "<p>No, it's not equivalent at all. You can for example fuse two triangular pyramids together to form a triangular bipyramid. This polyhedron has six equal faces which happens to be triangles instead (they could be equilateral). Since triangles are congruent if their side are the same and we can assign the edge lengths quite freely on such figure we don't need the triangles to be equilateral or even isosceles.</p>\n\n<p>Another counter example is a rhombohedron. Here all the edges are the same, but the faces are rhobi. Given the side length of a rhomb it's determined up to congruence by the angle at one corner. In the same way the rhombohedron is determined by its edge length and the angles at one corner and the angles at the oposite corner will match due to parallellity making all faces congruent.</p>\n\n<p>A more elaborate counter example could be to start with a rhombohedron and then \npick two opposite corners and then for each of the other we move them perpendicularily to or from the plane halfway the first corners (perpendicularily to the line connecting them) by equal amount.</p>\n\n<p>Apart from this there are vaugenesses in your proposed definition. The concepts of \"equal\", \"side\" is not as precise as one might think. Normally one talks about congruence in geometry (or \"similar\"). In addition what a \"side\" is is vaugue: do you mean surface or edge? do you require it to be flat or may it be curved? in that case if it may be a curved surface wheres the border of the surface (you could have a sphere and state that it's made up of six congruent surfaces)?</p>\n\n<p>Having that sorted out one could actually prove that there are basically only these two alternatives (the second is a special case of the third, and the cube is a special case of the second) and counter examples. Such a polyhedron would either have to be assembled the same way as a cube consisting of kite (quadrilateral where each edge has an adjacent edge of the same length) surfaces or assembled like a triangular bipyramid. </p>\n\n<p>The proof is by considering a corner and then rule out the possibility that other than three faces meet there. Then one rules out that they be anything else than triangles or quadrilateral. Last one rules out that the quadrilateral can be anything else than kites (by ruling out anything else than pairs of edges with the same length and that they can't be parallellograms unless they are rhomboids and thereby kites).</p>\n\n<p>After realizing the possibilities one could complete your proposal to apply only to cubes. Going along on the same theme it would be to require twelve equal edges and that all angles are the same.</p>\n" }, { "answer_id": 2081521, "score": 0, "text": "<p>If you talk about a “shape”, then I'm not sure the term “face” is well defined in this concept. In particular, there is no guarantee that it will be flat. So I'd say you need to define the shape as a polytope.</p>\n\n<p>The “symmetrical” in the definition you quote is pretty vague, too. You could easily come up with an object which has a mirror reflection but is pretty unsymmetric otherwise.</p>\n\n<p>If you require your polytope to be vertex-transitive and face-transitive, in addition to specifying the number of faces, then I guess you should be fine. 
One does usually require edge transitivity and convexity, too, to obtain the platonic solid, but in the case of the cube I guess those properties might be derivable from the others. Don't have a proof at hand, though.</p>\n" }, { "answer_id": 2081582, "score": 5, "text": "<p>No, it's not equivalent at all. You can for example fuse two triangular pyramids together to form a triangular bipyramid. This polyhedron has six equal faces which happens to be triangles instead (they could be equilateral). Since triangles are congruent if their side are the same and we can assign the edge lengths quite freely on such figure we don't need the triangles to be equilateral or even isosceles.</p>\n\n<p>Another counter example is a rhombohedron. Here all the edges are the same, but the faces are rhobi. Given the side length of a rhomb it's determined up to congruence by the angle at one corner. In the same way the rhombohedron is determined by its edge length and the angles at one corner and the angles at the oposite corner will match due to parallellity making all faces congruent.</p>\n\n<p>A more elaborate counter example could be to start with a rhombohedron and then \npick two opposite corners and then for each of the other we move them perpendicularily to or from the plane halfway the first corners (perpendicularily to the line connecting them) by equal amount.</p>\n\n<p>Apart from this there are vaugenesses in your proposed definition. The concepts of \"equal\", \"side\" is not as precise as one might think. Normally one talks about congruence in geometry (or \"similar\"). In addition what a \"side\" is is vaugue: do you mean surface or edge? do you require it to be flat or may it be curved? in that case if it may be a curved surface wheres the border of the surface (you could have a sphere and state that it's made up of six congruent surfaces)?</p>\n\n<p>Having that sorted out one could actually prove that there are basically only these two alternatives (the second is a special case of the third, and the cube is a special case of the second) and counter examples. Such a polyhedron would either have to be assembled the same way as a cube consisting of kite (quadrilateral where each edge has an adjacent edge of the same length) surfaces or assembled like a triangular bipyramid. </p>\n\n<p>The proof is by considering a corner and then rule out the possibility that other than three faces meet there. Then one rules out that they be anything else than triangles or quadrilateral. Last one rules out that the quadrilateral can be anything else than kites (by ruling out anything else than pairs of edges with the same length and that they can't be parallellograms unless they are rhomboids and thereby kites).</p>\n\n<p>After realizing the possibilities one could complete your proposal to apply only to cubes. Going along on the same theme it would be to require twelve equal edges and that all angles are the same.</p>\n" }, { "answer_id": 2081521, "score": 0, "text": "<p>If you talk about a “shape”, then I'm not sure the term “face” is well defined in this concept. In particular, there is no guarantee that it will be flat. So I'd say you need to define the shape as a polytope.</p>\n\n<p>The “symmetrical” in the definition you quote is pretty vague, too. 
You could easily come up with an object which has a mirror reflection but is pretty unsymmetric otherwise.</p>\n\n<p>If you require your polytope to be vertex-transitive and face-transitive, in addition to specifying the number of faces, then I guess you should be fine. One does usually require edge transitivity and convexity, too, to obtain the platonic solid, but in the case of the cube I guess those properties might be derivable from the others. Don't have a proof at hand, though.</p>\n" }, { "answer_id": 2081582, "score": 5, "text": "<p>No, it's not equivalent at all. You can for example fuse two triangular pyramids together to form a triangular bipyramid. This polyhedron has six equal faces which happens to be triangles instead (they could be equilateral). Since triangles are congruent if their side are the same and we can assign the edge lengths quite freely on such figure we don't need the triangles to be equilateral or even isosceles.</p>\n\n<p>Another counter example is a rhombohedron. Here all the edges are the same, but the faces are rhobi. Given the side length of a rhomb it's determined up to congruence by the angle at one corner. In the same way the rhombohedron is determined by its edge length and the angles at one corner and the angles at the oposite corner will match due to parallellity making all faces congruent.</p>\n\n<p>A more elaborate counter example could be to start with a rhombohedron and then \npick two opposite corners and then for each of the other we move them perpendicularily to or from the plane halfway the first corners (perpendicularily to the line connecting them) by equal amount.</p>\n\n<p>Apart from this there are vaugenesses in your proposed definition. The concepts of \"equal\", \"side\" is not as precise as one might think. Normally one talks about congruence in geometry (or \"similar\"). In addition what a \"side\" is is vaugue: do you mean surface or edge? do you require it to be flat or may it be curved? in that case if it may be a curved surface wheres the border of the surface (you could have a sphere and state that it's made up of six congruent surfaces)?</p>\n\n<p>Having that sorted out one could actually prove that there are basically only these two alternatives (the second is a special case of the third, and the cube is a special case of the second) and counter examples. Such a polyhedron would either have to be assembled the same way as a cube consisting of kite (quadrilateral where each edge has an adjacent edge of the same length) surfaces or assembled like a triangular bipyramid. </p>\n\n<p>The proof is by considering a corner and then rule out the possibility that other than three faces meet there. Then one rules out that they be anything else than triangles or quadrilateral. Last one rules out that the quadrilateral can be anything else than kites (by ruling out anything else than pairs of edges with the same length and that they can't be parallellograms unless they are rhomboids and thereby kites).</p>\n\n<p>After realizing the possibilities one could complete your proposal to apply only to cubes. Going along on the same theme it would be to require twelve equal edges and that all angles are the same.</p>\n" }, { "answer_id": 2081521, "score": 0, "text": "<p>If you talk about a “shape”, then I'm not sure the term “face” is well defined in this concept. In particular, there is no guarantee that it will be flat. 
So I'd say you need to define the shape as a polytope.</p>\n\n<p>The “symmetrical” in the definition you quote is pretty vague, too. You could easily come up with an object which has a mirror reflection but is pretty unsymmetric otherwise.</p>\n\n<p>If you require your polytope to be vertex-transitive and face-transitive, in addition to specifying the number of faces, then I guess you should be fine. One does usually require edge transitivity and convexity, too, to obtain the platonic solid, but in the case of the cube I guess those properties might be derivable from the others. Don't have a proof at hand, though.</p>\n" } ]
Question: Definition of a cube Answer 1: <p>No, it's not equivalent at all. You can for example fuse two triangular pyramids together to form a triangular bipyramid. This polyhedron has six equal faces which happen to be triangles instead (they could be equilateral). Since triangles are congruent if their sides are the same, and we can assign the edge lengths quite freely on such a figure, we don't need the triangles to be equilateral or even isosceles.</p> <p>Another counterexample is a rhombohedron. Here all the edges are the same, but the faces are rhombi. Given the side length of a rhombus, it's determined up to congruence by the angle at one corner. In the same way the rhombohedron is determined by its edge length and the angles at one corner, and the angles at the opposite corner will match due to parallelism, making all faces congruent.</p> <p>A more elaborate counterexample could be to start with a rhombohedron, pick two opposite corners, and then move each of the other corners perpendicularly to or from the plane halfway between the first two corners (perpendicularly to the line connecting them) by an equal amount.</p> <p>Apart from this there are vaguenesses in your proposed definition. The concepts of "equal" and "side" are not as precise as one might think. Normally one talks about congruence in geometry (or "similar"). In addition, what a "side" is is vague: do you mean surface or edge? Do you require it to be flat, or may it be curved? In that case, if it may be a curved surface, where is the border of the surface (you could have a sphere and state that it's made up of six congruent surfaces)?</p> <p>Having that sorted out, one could actually prove that there are basically only these two alternatives, as in the counterexamples above (the second is a special case of the third, and the cube is a special case of the second). Such a polyhedron would either have to be assembled the same way as a cube, consisting of kite (quadrilateral where each edge has an adjacent edge of the same length) surfaces, or assembled like a triangular bipyramid.</p> <p>The proof is by considering a corner and ruling out the possibility that anything other than three faces meet there. Then one rules out that they can be anything other than triangles or quadrilaterals. Last, one rules out that the quadrilaterals can be anything other than kites (by ruling out anything other than pairs of edges with the same length, and that they can't be parallelograms unless they are rhombi and thereby kites).</p> <p>After realizing the possibilities, one could complete your proposal to apply only to cubes. Going along on the same theme, it would be to require twelve equal edges and that all angles are the same.</p> Answer 2: <p>If you talk about a “shape”, then I'm not sure the term “face” is well defined in this concept. In particular, there is no guarantee that it will be flat. So I'd say you need to define the shape as a polytope.</p> <p>The “symmetrical” in the definition you quote is pretty vague, too. You could easily come up with an object which has a mirror reflection but is pretty asymmetric otherwise.</p> <p>If you require your polytope to be vertex-transitive and face-transitive, in addition to specifying the number of faces, then I guess you should be fine. One does usually require edge transitivity and convexity, too, to obtain the Platonic solids, but in the case of the cube I guess those properties might be derivable from the others. Don't have a proof at hand, though.</p>
How to integrate $\int \frac{(ar)}{\sqrt{a^2 - r^2}} dr $ ?
2,081,175
4
https://math.stackexchange.com/questions/2081175/how-to-integrate-int-fracar-sqrta2-r2-dr
<p>How to integrate $\int \frac{(ar)}{\sqrt{a^2 - r^2}} dr $ ? </p> <p>I tried making $ u = a^2 - r^2 $ but I can't seem to get $ -a\sqrt{a^2 - r^2} $ </p> <p>Any help is appreciated! Thank you. </p>
[ { "answer_id": 2081178, "score": 1, "text": "<p><strong>Hint</strong>. Make the change of variable\n$$\nu=a^2 - r^2,\\quad du=-2rdr\n$$ giving\n$$\n\\int \\frac{(ar)}{\\sqrt{a^2 - r^2}} dr=-\\frac a2\\int \\frac{du}{\\sqrt{u}} .\n$$</p>\n" }, { "answer_id": 2081179, "score": 1, "text": "<p>That is absolutely the right choice to substitute. You get </p>\n\n<p>$$\\int \\frac{ar}{\\sqrt{a^2-r^2}} dr=-\\frac{a}{2} \\int \\frac{1}{\\sqrt{u}} du$$</p>\n" }, { "answer_id": 2081180, "score": 1, "text": "<p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\\int \\frac{ar}{\\sqrt{a^2 - r^2}} dr= -\\frac{a}{2}\\int \\frac{1}{\\sqrt{u}} du = -a\\sqrt{u} +C = -a\\sqrt{a^2-r^2}+C$$ as expected.</p>\n" }, { "answer_id": 2081190, "score": 0, "text": "<p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p>\n\n<p>$$ \\frac{d}{dr}\\sqrt{a^2-r^2}=-\\frac{r}{\\sqrt{a^2-r^2}} $$</p>\n\n<p>Thus one finds, without using substitution that</p>\n\n<p>$$ \\int\\frac{ar}{\\sqrt{a^2-r^2}}dr= -a\\sqrt{a^2-r^2}+c$$</p>\n" }, { "answer_id": 2081178, "score": 1, "text": "<p><strong>Hint</strong>. Make the change of variable\n$$\nu=a^2 - r^2,\\quad du=-2rdr\n$$ giving\n$$\n\\int \\frac{(ar)}{\\sqrt{a^2 - r^2}} dr=-\\frac a2\\int \\frac{du}{\\sqrt{u}} .\n$$</p>\n" }, { "answer_id": 2081179, "score": 1, "text": "<p>That is absolutely the right choice to substitute. You get </p>\n\n<p>$$\\int \\frac{ar}{\\sqrt{a^2-r^2}} dr=-\\frac{a}{2} \\int \\frac{1}{\\sqrt{u}} du$$</p>\n" }, { "answer_id": 2081180, "score": 1, "text": "<p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\\int \\frac{ar}{\\sqrt{a^2 - r^2}} dr= -\\frac{a}{2}\\int \\frac{1}{\\sqrt{u}} du = -a\\sqrt{u} +C = -a\\sqrt{a^2-r^2}+C$$ as expected.</p>\n" }, { "answer_id": 2081190, "score": 0, "text": "<p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p>\n\n<p>$$ \\frac{d}{dr}\\sqrt{a^2-r^2}=-\\frac{r}{\\sqrt{a^2-r^2}} $$</p>\n\n<p>Thus one finds, without using substitution that</p>\n\n<p>$$ \\int\\frac{ar}{\\sqrt{a^2-r^2}}dr= -a\\sqrt{a^2-r^2}+c$$</p>\n" }, { "answer_id": 2081178, "score": 1, "text": "<p><strong>Hint</strong>. Make the change of variable\n$$\nu=a^2 - r^2,\\quad du=-2rdr\n$$ giving\n$$\n\\int \\frac{(ar)}{\\sqrt{a^2 - r^2}} dr=-\\frac a2\\int \\frac{du}{\\sqrt{u}} .\n$$</p>\n" }, { "answer_id": 2081179, "score": 1, "text": "<p>That is absolutely the right choice to substitute. You get </p>\n\n<p>$$\\int \\frac{ar}{\\sqrt{a^2-r^2}} dr=-\\frac{a}{2} \\int \\frac{1}{\\sqrt{u}} du$$</p>\n" }, { "answer_id": 2081180, "score": 1, "text": "<p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\\int \\frac{ar}{\\sqrt{a^2 - r^2}} dr= -\\frac{a}{2}\\int \\frac{1}{\\sqrt{u}} du = -a\\sqrt{u} +C = -a\\sqrt{a^2-r^2}+C$$ as expected.</p>\n" }, { "answer_id": 2081190, "score": 0, "text": "<p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p>\n\n<p>$$ \\frac{d}{dr}\\sqrt{a^2-r^2}=-\\frac{r}{\\sqrt{a^2-r^2}} $$</p>\n\n<p>Thus one finds, without using substitution that</p>\n\n<p>$$ \\int\\frac{ar}{\\sqrt{a^2-r^2}}dr= -a\\sqrt{a^2-r^2}+c$$</p>\n" }, { "answer_id": 2081178, "score": 1, "text": "<p><strong>Hint</strong>. 
Make the change of variable\n$$\nu=a^2 - r^2,\\quad du=-2rdr\n$$ giving\n$$\n\\int \\frac{(ar)}{\\sqrt{a^2 - r^2}} dr=-\\frac a2\\int \\frac{du}{\\sqrt{u}} .\n$$</p>\n" }, { "answer_id": 2081179, "score": 1, "text": "<p>That is absolutely the right choice to substitute. You get </p>\n\n<p>$$\\int \\frac{ar}{\\sqrt{a^2-r^2}} dr=-\\frac{a}{2} \\int \\frac{1}{\\sqrt{u}} du$$</p>\n" }, { "answer_id": 2081180, "score": 1, "text": "<p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\\int \\frac{ar}{\\sqrt{a^2 - r^2}} dr= -\\frac{a}{2}\\int \\frac{1}{\\sqrt{u}} du = -a\\sqrt{u} +C = -a\\sqrt{a^2-r^2}+C$$ as expected.</p>\n" }, { "answer_id": 2081190, "score": 0, "text": "<p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p>\n\n<p>$$ \\frac{d}{dr}\\sqrt{a^2-r^2}=-\\frac{r}{\\sqrt{a^2-r^2}} $$</p>\n\n<p>Thus one finds, without using substitution that</p>\n\n<p>$$ \\int\\frac{ar}{\\sqrt{a^2-r^2}}dr= -a\\sqrt{a^2-r^2}+c$$</p>\n" } ]
Question: How to integrate $\int \frac{(ar)}{\sqrt{a^2 - r^2}} dr $ ? Answer 1: <p><strong>Hint</strong>. Make the change of variable $$ u=a^2 - r^2,\quad du=-2rdr $$ giving $$ \int \frac{(ar)}{\sqrt{a^2 - r^2}} dr=-\frac a2\int \frac{du}{\sqrt{u}} . $$</p> Answer 2: <p>That is absolutely the right choice to substitute. You get </p> <p>$$\int \frac{ar}{\sqrt{a^2-r^2}} dr=-\frac{a}{2} \int \frac{1}{\sqrt{u}} du$$</p> Answer 3: <p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\int \frac{ar}{\sqrt{a^2 - r^2}} dr= -\frac{a}{2}\int \frac{1}{\sqrt{u}} du = -a\sqrt{u} +C = -a\sqrt{a^2-r^2}+C$$ as expected.</p> Answer 4: <p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p> <p>$$ \frac{d}{dr}\sqrt{a^2-r^2}=-\frac{r}{\sqrt{a^2-r^2}} $$</p> <p>Thus one finds, without using substitution that</p> <p>$$ \int\frac{ar}{\sqrt{a^2-r^2}}dr= -a\sqrt{a^2-r^2}+c$$</p> Answer 5: <p><strong>Hint</strong>. Make the change of variable $$ u=a^2 - r^2,\quad du=-2rdr $$ giving $$ \int \frac{(ar)}{\sqrt{a^2 - r^2}} dr=-\frac a2\int \frac{du}{\sqrt{u}} . $$</p> Answer 6: <p>That is absolutely the right choice to substitute. You get </p> <p>$$\int \frac{ar}{\sqrt{a^2-r^2}} dr=-\frac{a}{2} \int \frac{1}{\sqrt{u}} du$$</p> Answer 7: <p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\int \frac{ar}{\sqrt{a^2 - r^2}} dr= -\frac{a}{2}\int \frac{1}{\sqrt{u}} du = -a\sqrt{u} +C = -a\sqrt{a^2-r^2}+C$$ as expected.</p> Answer 8: <p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p> <p>$$ \frac{d}{dr}\sqrt{a^2-r^2}=-\frac{r}{\sqrt{a^2-r^2}} $$</p> <p>Thus one finds, without using substitution that</p> <p>$$ \int\frac{ar}{\sqrt{a^2-r^2}}dr= -a\sqrt{a^2-r^2}+c$$</p> Answer 9: <p><strong>Hint</strong>. Make the change of variable $$ u=a^2 - r^2,\quad du=-2rdr $$ giving $$ \int \frac{(ar)}{\sqrt{a^2 - r^2}} dr=-\frac a2\int \frac{du}{\sqrt{u}} . $$</p> Answer 10: <p>That is absolutely the right choice to substitute. You get </p> <p>$$\int \frac{ar}{\sqrt{a^2-r^2}} dr=-\frac{a}{2} \int \frac{1}{\sqrt{u}} du$$</p> Answer 11: <p>Well $u=a^2-r^2$ is a good idea. With this substitution, we get $du = -2r dr$ and finally $$\int \frac{ar}{\sqrt{a^2 - r^2}} dr= -\frac{a}{2}\int \frac{1}{\sqrt{u}} du = -a\sqrt{u} +C = -a\sqrt{a^2-r^2}+C$$ as expected.</p> Answer 12: <p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p> <p>$$ \frac{d}{dr}\sqrt{a^2-r^2}=-\frac{r}{\sqrt{a^2-r^2}} $$</p> <p>Thus one finds, without using substitution that</p> <p>$$ \int\frac{ar}{\sqrt{a^2-r^2}}dr= -a\sqrt{a^2-r^2}+c$$</p> Answer 13: <p><strong>Hint</strong>. Make the change of variable $$ u=a^2 - r^2,\quad du=-2rdr $$ giving $$ \int \frac{(ar)}{\sqrt{a^2 - r^2}} dr=-\frac a2\int \frac{du}{\sqrt{u}} . $$</p> Answer 14: <p>That is absolutely the right choice to substitute. You get </p> <p>$$\int \frac{ar}{\sqrt{a^2-r^2}} dr=-\frac{a}{2} \int \frac{1}{\sqrt{u}} du$$</p> Answer 15: <p>Well $u=a^2-r^2$ is a good idea. 
With this substitution, we get $du = -2r dr$ and finally $$\int \frac{ar}{\sqrt{a^2 - r^2}} dr= -\frac{a}{2}\int \frac{1}{\sqrt{u}} du = -a\sqrt{u} +C = -a\sqrt{a^2-r^2}+C$$ as expected.</p> Answer 16: <p>Although this type of integral is usually done by trigonometric or $u$-substitution, in this case one should notice that</p> <p>$$ \frac{d}{dr}\sqrt{a^2-r^2}=-\frac{r}{\sqrt{a^2-r^2}} $$</p> <p>Thus one finds, without using substitution that</p> <p>$$ \int\frac{ar}{\sqrt{a^2-r^2}}dr= -a\sqrt{a^2-r^2}+c$$</p>
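As a quick sanity check of the answers above, here is a minimal SymPy sketch (the symbol names are illustrative) that differentiates the claimed antiderivative $-a\sqrt{a^2-r^2}$ and confirms it reproduces the integrand:

```python
# Verify that d/dr [ -a*sqrt(a**2 - r**2) ] equals a*r / sqrt(a**2 - r**2).
import sympy as sp

a, r = sp.symbols("a r", positive=True)
antiderivative = -a * sp.sqrt(a**2 - r**2)
integrand = a * r / sp.sqrt(a**2 - r**2)

# simplify(...) == 0 confirms the answers agree up to an additive constant
assert sp.simplify(sp.diff(antiderivative, r) - integrand) == 0
```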
How many digits of accuracy will an answer have?
2,081,213
4
https://math.stackexchange.com/questions/2081213/how-many-digits-of-accuracy-will-an-answer-have
<p>I was doing a Project Euler problem where I needed to find several Fibonacci numbers, but their index was so large that I could not use the typical recursive method. Instead, I used Binet's formula:</p> <p>$$ F_n = \frac{\phi^n - \psi^n}{\sqrt{5}}$$</p> <p>But since $\phi$, $\psi$, and $\sqrt{5}$ are all irrational, I didn't know how many digits past the decimal place I needed to drag them out, especially since the Fibonacci numbers were hundreds or thousands of digits long. I ended up making the irrational numbers obscenely accurate to 1000 or 10000 decimal places just to be safe.</p> <p>Is there some kind of rule like "$x$ digits of precision in the inputs will result in $y$ digits of precision in the answer"?</p> <p>I have already solved the problem just fine; I was just wondering about the precision part.</p>
[ { "answer_id": 2081266, "score": 5, "text": "<p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p>\n\n<p>Here it is some $\\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p>\n\n<pre>\n /* Fast Fibonacci Numbers Computation */\n fastfib(n):=block([O:2,L:?integer\\-length(n),i,f0:2,f1:1,u,v],\n for i:1 thru L do (\n u:f0^2, v:f1^2,\n if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 )\n else (f0:u-O, f1:v+O-f0, O:2 )),\n return((?ash(f1,1)-f0)/5)\n );\n</pre>\n\n<p>This is included in the $\\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\\log_2(n)$ squares. With <a href=\"https://en.wikipedia.org/wiki/Karatsuba_algorithm\" rel=\"nofollow noreferrer\">Karatsuba algorithm</a> (invoked in $\\text{Maxima}$ through the $\\text{fasttimes}$ command) or <a href=\"https://www.hindawi.com/journals/jam/2014/107109/alg5/\" rel=\"nofollow noreferrer\">similar approaches</a> the computation of $m^2$ requires just $\\log_2(m)^{1.585}$ elementary operations.</p>\n\n<p>This is pretty efficient. Here it is some test on my machine:</p>\n\n<pre><code>Maxima 5.32.1 http://maxima.sourceforge.net\nusing Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL)\nDistributed under the GNU Public License. See the file COPYING.\nDedicated to the memory of William Schelter.\nThe function bug_report() provides bug reporting information.\n(%i1) load(\"gfpatch.mac\");\n(%o1) gfpatch.mac\n(%i2) showtime:true;\nEvaluation took 0.0000 seconds (0.0000 elapsed)\n(%o2) true\n(%i3) fastfib(10000000)$\nEvaluation took 0.0800 seconds (0.0900 elapsed)\n</code></pre>\n\n<p>Back to the actual question: since $\\phi^n=\\exp(n\\log\\phi)$ and $\\frac{d}{dx}x^n = n x^{n-1}$, we roughly have\n$(\\phi+\\varepsilon)^n \\approx \\phi^n+n\\varepsilon\\phi^{n-1}$, so a good accuracy is achieved as soon as $\\varepsilon\\ll\\frac{1}{n\\phi^{n-1}}$, and we need to store $\\approx 0.7n$ binary digits of $\\varphi$ to compute $\\varphi^n$ with decent accuracy.</p>\n" }, { "answer_id": 2081288, "score": 2, "text": "<p>You could definitely improve the calculations <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\" rel=\"nofollow noreferrer\">by considering only $\\phi$</a>\n$$F_n=\\left[\\frac{\\phi^n}{\\sqrt{5}}\\right]$$\nRe the error, taking a $\\phi^{*} \\in \\mathbb{Q}$ such that $|\\phi - \\phi^{*}|&lt;\\frac{1}{n2^{n-1}10^m}$, leads to:\n$$|\\phi^n - (\\phi^{*})^n|=|\\phi - \\phi^{*}|\\cdot |\\phi^{n-1}+\\phi^{n-2}\\phi^{*}+...+(\\phi^{*})^{n-1}|\\leq |\\phi - \\phi^{*}|\\cdot n2^{n-1}&lt;\\frac{1}{10^m}$$</p>\n\n<p>On another note, the very same wikipedia article suggests a procedure that:</p>\n\n<blockquote>\n <p>These last two identities provide a way to compute Fibonacci numbers\n recursively in $O(\\log{n})$ arithmetic operations ...</p>\n</blockquote>\n" }, { "answer_id": 2081266, "score": 5, "text": "<p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p>\n\n<p>Here it is some $\\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p>\n\n<pre>\n /* Fast Fibonacci Numbers Computation */\n fastfib(n):=block([O:2,L:?integer\\-length(n),i,f0:2,f1:1,u,v],\n for i:1 thru L do (\n u:f0^2, v:f1^2,\n if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 )\n 
else (f0:u-O, f1:v+O-f0, O:2 )),\n return((?ash(f1,1)-f0)/5)\n );\n</pre>\n\n<p>This is included in the $\\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\\log_2(n)$ squares. With <a href=\"https://en.wikipedia.org/wiki/Karatsuba_algorithm\" rel=\"nofollow noreferrer\">Karatsuba algorithm</a> (invoked in $\\text{Maxima}$ through the $\\text{fasttimes}$ command) or <a href=\"https://www.hindawi.com/journals/jam/2014/107109/alg5/\" rel=\"nofollow noreferrer\">similar approaches</a> the computation of $m^2$ requires just $\\log_2(m)^{1.585}$ elementary operations.</p>\n\n<p>This is pretty efficient. Here it is some test on my machine:</p>\n\n<pre><code>Maxima 5.32.1 http://maxima.sourceforge.net\nusing Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL)\nDistributed under the GNU Public License. See the file COPYING.\nDedicated to the memory of William Schelter.\nThe function bug_report() provides bug reporting information.\n(%i1) load(\"gfpatch.mac\");\n(%o1) gfpatch.mac\n(%i2) showtime:true;\nEvaluation took 0.0000 seconds (0.0000 elapsed)\n(%o2) true\n(%i3) fastfib(10000000)$\nEvaluation took 0.0800 seconds (0.0900 elapsed)\n</code></pre>\n\n<p>Back to the actual question: since $\\phi^n=\\exp(n\\log\\phi)$ and $\\frac{d}{dx}x^n = n x^{n-1}$, we roughly have\n$(\\phi+\\varepsilon)^n \\approx \\phi^n+n\\varepsilon\\phi^{n-1}$, so a good accuracy is achieved as soon as $\\varepsilon\\ll\\frac{1}{n\\phi^{n-1}}$, and we need to store $\\approx 0.7n$ binary digits of $\\varphi$ to compute $\\varphi^n$ with decent accuracy.</p>\n" }, { "answer_id": 2081288, "score": 2, "text": "<p>You could definitely improve the calculations <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\" rel=\"nofollow noreferrer\">by considering only $\\phi$</a>\n$$F_n=\\left[\\frac{\\phi^n}{\\sqrt{5}}\\right]$$\nRe the error, taking a $\\phi^{*} \\in \\mathbb{Q}$ such that $|\\phi - \\phi^{*}|&lt;\\frac{1}{n2^{n-1}10^m}$, leads to:\n$$|\\phi^n - (\\phi^{*})^n|=|\\phi - \\phi^{*}|\\cdot |\\phi^{n-1}+\\phi^{n-2}\\phi^{*}+...+(\\phi^{*})^{n-1}|\\leq |\\phi - \\phi^{*}|\\cdot n2^{n-1}&lt;\\frac{1}{10^m}$$</p>\n\n<p>On another note, the very same wikipedia article suggests a procedure that:</p>\n\n<blockquote>\n <p>These last two identities provide a way to compute Fibonacci numbers\n recursively in $O(\\log{n})$ arithmetic operations ...</p>\n</blockquote>\n" }, { "answer_id": 2081266, "score": 5, "text": "<p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p>\n\n<p>Here it is some $\\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p>\n\n<pre>\n /* Fast Fibonacci Numbers Computation */\n fastfib(n):=block([O:2,L:?integer\\-length(n),i,f0:2,f1:1,u,v],\n for i:1 thru L do (\n u:f0^2, v:f1^2,\n if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 )\n else (f0:u-O, f1:v+O-f0, O:2 )),\n return((?ash(f1,1)-f0)/5)\n );\n</pre>\n\n<p>This is included in the $\\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\\log_2(n)$ squares. 
With <a href=\"https://en.wikipedia.org/wiki/Karatsuba_algorithm\" rel=\"nofollow noreferrer\">Karatsuba algorithm</a> (invoked in $\\text{Maxima}$ through the $\\text{fasttimes}$ command) or <a href=\"https://www.hindawi.com/journals/jam/2014/107109/alg5/\" rel=\"nofollow noreferrer\">similar approaches</a> the computation of $m^2$ requires just $\\log_2(m)^{1.585}$ elementary operations.</p>\n\n<p>This is pretty efficient. Here it is some test on my machine:</p>\n\n<pre><code>Maxima 5.32.1 http://maxima.sourceforge.net\nusing Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL)\nDistributed under the GNU Public License. See the file COPYING.\nDedicated to the memory of William Schelter.\nThe function bug_report() provides bug reporting information.\n(%i1) load(\"gfpatch.mac\");\n(%o1) gfpatch.mac\n(%i2) showtime:true;\nEvaluation took 0.0000 seconds (0.0000 elapsed)\n(%o2) true\n(%i3) fastfib(10000000)$\nEvaluation took 0.0800 seconds (0.0900 elapsed)\n</code></pre>\n\n<p>Back to the actual question: since $\\phi^n=\\exp(n\\log\\phi)$ and $\\frac{d}{dx}x^n = n x^{n-1}$, we roughly have\n$(\\phi+\\varepsilon)^n \\approx \\phi^n+n\\varepsilon\\phi^{n-1}$, so a good accuracy is achieved as soon as $\\varepsilon\\ll\\frac{1}{n\\phi^{n-1}}$, and we need to store $\\approx 0.7n$ binary digits of $\\varphi$ to compute $\\varphi^n$ with decent accuracy.</p>\n" }, { "answer_id": 2081288, "score": 2, "text": "<p>You could definitely improve the calculations <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\" rel=\"nofollow noreferrer\">by considering only $\\phi$</a>\n$$F_n=\\left[\\frac{\\phi^n}{\\sqrt{5}}\\right]$$\nRe the error, taking a $\\phi^{*} \\in \\mathbb{Q}$ such that $|\\phi - \\phi^{*}|&lt;\\frac{1}{n2^{n-1}10^m}$, leads to:\n$$|\\phi^n - (\\phi^{*})^n|=|\\phi - \\phi^{*}|\\cdot |\\phi^{n-1}+\\phi^{n-2}\\phi^{*}+...+(\\phi^{*})^{n-1}|\\leq |\\phi - \\phi^{*}|\\cdot n2^{n-1}&lt;\\frac{1}{10^m}$$</p>\n\n<p>On another note, the very same wikipedia article suggests a procedure that:</p>\n\n<blockquote>\n <p>These last two identities provide a way to compute Fibonacci numbers\n recursively in $O(\\log{n})$ arithmetic operations ...</p>\n</blockquote>\n" }, { "answer_id": 2081266, "score": 5, "text": "<p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p>\n\n<p>Here it is some $\\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p>\n\n<pre>\n /* Fast Fibonacci Numbers Computation */\n fastfib(n):=block([O:2,L:?integer\\-length(n),i,f0:2,f1:1,u,v],\n for i:1 thru L do (\n u:f0^2, v:f1^2,\n if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 )\n else (f0:u-O, f1:v+O-f0, O:2 )),\n return((?ash(f1,1)-f0)/5)\n );\n</pre>\n\n<p>This is included in the $\\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\\log_2(n)$ squares. With <a href=\"https://en.wikipedia.org/wiki/Karatsuba_algorithm\" rel=\"nofollow noreferrer\">Karatsuba algorithm</a> (invoked in $\\text{Maxima}$ through the $\\text{fasttimes}$ command) or <a href=\"https://www.hindawi.com/journals/jam/2014/107109/alg5/\" rel=\"nofollow noreferrer\">similar approaches</a> the computation of $m^2$ requires just $\\log_2(m)^{1.585}$ elementary operations.</p>\n\n<p>This is pretty efficient. 
Here it is some test on my machine:</p>\n\n<pre><code>Maxima 5.32.1 http://maxima.sourceforge.net\nusing Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL)\nDistributed under the GNU Public License. See the file COPYING.\nDedicated to the memory of William Schelter.\nThe function bug_report() provides bug reporting information.\n(%i1) load(\"gfpatch.mac\");\n(%o1) gfpatch.mac\n(%i2) showtime:true;\nEvaluation took 0.0000 seconds (0.0000 elapsed)\n(%o2) true\n(%i3) fastfib(10000000)$\nEvaluation took 0.0800 seconds (0.0900 elapsed)\n</code></pre>\n\n<p>Back to the actual question: since $\\phi^n=\\exp(n\\log\\phi)$ and $\\frac{d}{dx}x^n = n x^{n-1}$, we roughly have\n$(\\phi+\\varepsilon)^n \\approx \\phi^n+n\\varepsilon\\phi^{n-1}$, so a good accuracy is achieved as soon as $\\varepsilon\\ll\\frac{1}{n\\phi^{n-1}}$, and we need to store $\\approx 0.7n$ binary digits of $\\varphi$ to compute $\\varphi^n$ with decent accuracy.</p>\n" }, { "answer_id": 2081288, "score": 2, "text": "<p>You could definitely improve the calculations <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding\" rel=\"nofollow noreferrer\">by considering only $\\phi$</a>\n$$F_n=\\left[\\frac{\\phi^n}{\\sqrt{5}}\\right]$$\nRe the error, taking a $\\phi^{*} \\in \\mathbb{Q}$ such that $|\\phi - \\phi^{*}|&lt;\\frac{1}{n2^{n-1}10^m}$, leads to:\n$$|\\phi^n - (\\phi^{*})^n|=|\\phi - \\phi^{*}|\\cdot |\\phi^{n-1}+\\phi^{n-2}\\phi^{*}+...+(\\phi^{*})^{n-1}|\\leq |\\phi - \\phi^{*}|\\cdot n2^{n-1}&lt;\\frac{1}{10^m}$$</p>\n\n<p>On another note, the very same wikipedia article suggests a procedure that:</p>\n\n<blockquote>\n <p>These last two identities provide a way to compute Fibonacci numbers\n recursively in $O(\\log{n})$ arithmetic operations ...</p>\n</blockquote>\n" } ]
Question: How many digits of accuracy will an answer have? Answer 1: <p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p> <p>Here it is some $\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p> <pre> /* Fast Fibonacci Numbers Computation */ fastfib(n):=block([O:2,L:?integer\-length(n),i,f0:2,f1:1,u,v], for i:1 thru L do ( u:f0^2, v:f1^2, if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 ) else (f0:u-O, f1:v+O-f0, O:2 )), return((?ash(f1,1)-f0)/5) ); </pre> <p>This is included in the $\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\log_2(n)$ squares. With <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow noreferrer">Karatsuba algorithm</a> (invoked in $\text{Maxima}$ through the $\text{fasttimes}$ command) or <a href="https://www.hindawi.com/journals/jam/2014/107109/alg5/" rel="nofollow noreferrer">similar approaches</a> the computation of $m^2$ requires just $\log_2(m)^{1.585}$ elementary operations.</p> <p>This is pretty efficient. Here it is some test on my machine:</p> <pre><code>Maxima 5.32.1 http://maxima.sourceforge.net using Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL) Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. (%i1) load("gfpatch.mac"); (%o1) gfpatch.mac (%i2) showtime:true; Evaluation took 0.0000 seconds (0.0000 elapsed) (%o2) true (%i3) fastfib(10000000)$ Evaluation took 0.0800 seconds (0.0900 elapsed) </code></pre> <p>Back to the actual question: since $\phi^n=\exp(n\log\phi)$ and $\frac{d}{dx}x^n = n x^{n-1}$, we roughly have $(\phi+\varepsilon)^n \approx \phi^n+n\varepsilon\phi^{n-1}$, so a good accuracy is achieved as soon as $\varepsilon\ll\frac{1}{n\phi^{n-1}}$, and we need to store $\approx 0.7n$ binary digits of $\varphi$ to compute $\varphi^n$ with decent accuracy.</p> Answer 2: <p>You could definitely improve the calculations <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding" rel="nofollow noreferrer">by considering only $\phi$</a> $$F_n=\left[\frac{\phi^n}{\sqrt{5}}\right]$$ Re the error, taking a $\phi^{*} \in \mathbb{Q}$ such that $|\phi - \phi^{*}|&lt;\frac{1}{n2^{n-1}10^m}$, leads to: $$|\phi^n - (\phi^{*})^n|=|\phi - \phi^{*}|\cdot |\phi^{n-1}+\phi^{n-2}\phi^{*}+...+(\phi^{*})^{n-1}|\leq |\phi - \phi^{*}|\cdot n2^{n-1}&lt;\frac{1}{10^m}$$</p> <p>On another note, the very same wikipedia article suggests a procedure that:</p> <blockquote> <p>These last two identities provide a way to compute Fibonacci numbers recursively in $O(\log{n})$ arithmetic operations ...</p> </blockquote> Answer 3: <p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p> <p>Here it is some $\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p> <pre> /* Fast Fibonacci Numbers Computation */ fastfib(n):=block([O:2,L:?integer\-length(n),i,f0:2,f1:1,u,v], for i:1 thru L do ( u:f0^2, v:f1^2, if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 ) else (f0:u-O, f1:v+O-f0, O:2 )), return((?ash(f1,1)-f0)/5) ); </pre> <p>This is included in the $\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\log_2(n)$ squares. 
With <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow noreferrer">Karatsuba algorithm</a> (invoked in $\text{Maxima}$ through the $\text{fasttimes}$ command) or <a href="https://www.hindawi.com/journals/jam/2014/107109/alg5/" rel="nofollow noreferrer">similar approaches</a> the computation of $m^2$ requires just $\log_2(m)^{1.585}$ elementary operations.</p> <p>This is pretty efficient. Here it is some test on my machine:</p> <pre><code>Maxima 5.32.1 http://maxima.sourceforge.net using Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL) Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. (%i1) load("gfpatch.mac"); (%o1) gfpatch.mac (%i2) showtime:true; Evaluation took 0.0000 seconds (0.0000 elapsed) (%o2) true (%i3) fastfib(10000000)$ Evaluation took 0.0800 seconds (0.0900 elapsed) </code></pre> <p>Back to the actual question: since $\phi^n=\exp(n\log\phi)$ and $\frac{d}{dx}x^n = n x^{n-1}$, we roughly have $(\phi+\varepsilon)^n \approx \phi^n+n\varepsilon\phi^{n-1}$, so a good accuracy is achieved as soon as $\varepsilon\ll\frac{1}{n\phi^{n-1}}$, and we need to store $\approx 0.7n$ binary digits of $\varphi$ to compute $\varphi^n$ with decent accuracy.</p> Answer 4: <p>You could definitely improve the calculations <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding" rel="nofollow noreferrer">by considering only $\phi$</a> $$F_n=\left[\frac{\phi^n}{\sqrt{5}}\right]$$ Re the error, taking a $\phi^{*} \in \mathbb{Q}$ such that $|\phi - \phi^{*}|&lt;\frac{1}{n2^{n-1}10^m}$, leads to: $$|\phi^n - (\phi^{*})^n|=|\phi - \phi^{*}|\cdot |\phi^{n-1}+\phi^{n-2}\phi^{*}+...+(\phi^{*})^{n-1}|\leq |\phi - \phi^{*}|\cdot n2^{n-1}&lt;\frac{1}{10^m}$$</p> <p>On another note, the very same wikipedia article suggests a procedure that:</p> <blockquote> <p>These last two identities provide a way to compute Fibonacci numbers recursively in $O(\log{n})$ arithmetic operations ...</p> </blockquote> Answer 5: <p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p> <p>Here it is some $\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p> <pre> /* Fast Fibonacci Numbers Computation */ fastfib(n):=block([O:2,L:?integer\-length(n),i,f0:2,f1:1,u,v], for i:1 thru L do ( u:f0^2, v:f1^2, if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 ) else (f0:u-O, f1:v+O-f0, O:2 )), return((?ash(f1,1)-f0)/5) ); </pre> <p>This is included in the $\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\log_2(n)$ squares. With <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow noreferrer">Karatsuba algorithm</a> (invoked in $\text{Maxima}$ through the $\text{fasttimes}$ command) or <a href="https://www.hindawi.com/journals/jam/2014/107109/alg5/" rel="nofollow noreferrer">similar approaches</a> the computation of $m^2$ requires just $\log_2(m)^{1.585}$ elementary operations.</p> <p>This is pretty efficient. Here it is some test on my machine:</p> <pre><code>Maxima 5.32.1 http://maxima.sourceforge.net using Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL) Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. 
(%i1) load("gfpatch.mac"); (%o1) gfpatch.mac (%i2) showtime:true; Evaluation took 0.0000 seconds (0.0000 elapsed) (%o2) true (%i3) fastfib(10000000)$ Evaluation took 0.0800 seconds (0.0900 elapsed) </code></pre> <p>Back to the actual question: since $\phi^n=\exp(n\log\phi)$ and $\frac{d}{dx}x^n = n x^{n-1}$, we roughly have $(\phi+\varepsilon)^n \approx \phi^n+n\varepsilon\phi^{n-1}$, so a good accuracy is achieved as soon as $\varepsilon\ll\frac{1}{n\phi^{n-1}}$, and we need to store $\approx 0.7n$ binary digits of $\varphi$ to compute $\varphi^n$ with decent accuracy.</p> Answer 6: <p>You could definitely improve the calculations <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding" rel="nofollow noreferrer">by considering only $\phi$</a> $$F_n=\left[\frac{\phi^n}{\sqrt{5}}\right]$$ Re the error, taking a $\phi^{*} \in \mathbb{Q}$ such that $|\phi - \phi^{*}|&lt;\frac{1}{n2^{n-1}10^m}$, leads to: $$|\phi^n - (\phi^{*})^n|=|\phi - \phi^{*}|\cdot |\phi^{n-1}+\phi^{n-2}\phi^{*}+...+(\phi^{*})^{n-1}|\leq |\phi - \phi^{*}|\cdot n2^{n-1}&lt;\frac{1}{10^m}$$</p> <p>On another note, the very same wikipedia article suggests a procedure that:</p> <blockquote> <p>These last two identities provide a way to compute Fibonacci numbers recursively in $O(\log{n})$ arithmetic operations ...</p> </blockquote> Answer 7: <p>You do not really need to work in floating point: Fibonacci numbers fulfill interesting duplication identities, so there are fast algorithms in integer arithmetic, as Brevan Ellefsen mentioned.</p> <p>Here it is some $\text{Maxima}$ code for the fast computation of the $n$-th Fibonacci number.</p> <pre> /* Fast Fibonacci Numbers Computation */ fastfib(n):=block([O:2,L:?integer\-length(n),i,f0:2,f1:1,u,v], for i:1 thru L do ( u:f0^2, v:f1^2, if ?logbitp(L-i,n) then (f1:v+O, f0:f1-u+O, O:-2 ) else (f0:u-O, f1:v+O-f0, O:2 )), return((?ash(f1,1)-f0)/5) ); </pre> <p>This is included in the $\text{gf}$ package. In order to compute $F_n$, we just need to compute $2\log_2(n)$ squares. With <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow noreferrer">Karatsuba algorithm</a> (invoked in $\text{Maxima}$ through the $\text{fasttimes}$ command) or <a href="https://www.hindawi.com/journals/jam/2014/107109/alg5/" rel="nofollow noreferrer">similar approaches</a> the computation of $m^2$ requires just $\log_2(m)^{1.585}$ elementary operations.</p> <p>This is pretty efficient. Here it is some test on my machine:</p> <pre><code>Maxima 5.32.1 http://maxima.sourceforge.net using Lisp GNU Common Lisp (GCL) GCL 2.6.10 (a.k.a. GCL) Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. 
(%i1) load("gfpatch.mac"); (%o1) gfpatch.mac (%i2) showtime:true; Evaluation took 0.0000 seconds (0.0000 elapsed) (%o2) true (%i3) fastfib(10000000)$ Evaluation took 0.0800 seconds (0.0900 elapsed) </code></pre> <p>Back to the actual question: since $\phi^n=\exp(n\log\phi)$ and $\frac{d}{dx}x^n = n x^{n-1}$, we roughly have $(\phi+\varepsilon)^n \approx \phi^n+n\varepsilon\phi^{n-1}$, so a good accuracy is achieved as soon as $\varepsilon\ll\frac{1}{n\phi^{n-1}}$, and we need to store $\approx 0.7n$ binary digits of $\varphi$ to compute $\varphi^n$ with decent accuracy.</p> Answer 8: <p>You could definitely improve the calculations <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding" rel="nofollow noreferrer">by considering only $\phi$</a> $$F_n=\left[\frac{\phi^n}{\sqrt{5}}\right]$$ Re the error, taking a $\phi^{*} \in \mathbb{Q}$ such that $|\phi - \phi^{*}|&lt;\frac{1}{n2^{n-1}10^m}$, leads to: $$|\phi^n - (\phi^{*})^n|=|\phi - \phi^{*}|\cdot |\phi^{n-1}+\phi^{n-2}\phi^{*}+...+(\phi^{*})^{n-1}|\leq |\phi - \phi^{*}|\cdot n2^{n-1}&lt;\frac{1}{10^m}$$</p> <p>On another note, the very same wikipedia article suggests a procedure that:</p> <blockquote> <p>These last two identities provide a way to compute Fibonacci numbers recursively in $O(\log{n})$ arithmetic operations ...</p> </blockquote>
Polynomial approximation (Weierstrass&#39; Theorem) with equality at the endpoints
2,081,232
4
https://math.stackexchange.com/questions/2081232/polynomial-approximation-weierstrass-theorem-with-equality-at-the-endpoints
<p>Let $f:\mathbb R \rightarrow\mathbb R $ be continuous, and choose $-\infty &lt; a&lt;b&lt;\infty$ and $\epsilon &gt; 0$. Show there exists a polynomial $p$ such that:</p> <p>a) $p(a)=f(a),\; p(b)=f(b)$</p> <p>b) $|f(t)-p(t)|&lt;\epsilon$ for all $t\in [a,b]$</p> <p>Part b) is just the Weierstrass approximation theorem, but I didn't manage to get part a). I tried conjuring up a polynomial of the form $$p=p_\epsilon+(\frac{a-x}{a-b})^Nf(b)+(\frac{b-x}{b-a})^Nf(a)$$ such that $p_\epsilon$ satisfies b) by the theorem, and then to play with the constants to set bounds on the maximal error that the extra terms introduce (and also to cancel out $p_\epsilon$ at the endpoints). I don't think this can work, since it eventually means that the extra terms are expected to approximate "some portion" of $f$, but given the fact that the proof of Weierstrass' theorem is pretty complicated, I guess this isn't the way to go... How should I approach this properly? Thanks</p>
[ { "answer_id": 2081267, "score": 5, "text": "<p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \\to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p>\n\n<p>$$Q_n(x)=P_n(x)- P_n(a) -\\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p>\n\n<p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p>\n\n<p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \\to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p>\n" }, { "answer_id": 2081349, "score": 2, "text": "<p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\\delta$. Your new polynomial should be \n$$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\\le \\delta$. We can choose $\\delta$ so that $Q$ does not exceed $\\epsilon/2$ on the whole set. Make sure to take $\\delta \\le \\epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\\epsilon$ and coincides with $f$ at the given points. </p>\n" }, { "answer_id": 2081267, "score": 5, "text": "<p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \\to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p>\n\n<p>$$Q_n(x)=P_n(x)- P_n(a) -\\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p>\n\n<p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p>\n\n<p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \\to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p>\n" }, { "answer_id": 2081349, "score": 2, "text": "<p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\\delta$. Your new polynomial should be \n$$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\\le \\delta$. We can choose $\\delta$ so that $Q$ does not exceed $\\epsilon/2$ on the whole set. Make sure to take $\\delta \\le \\epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\\epsilon$ and coincides with $f$ at the given points. </p>\n" }, { "answer_id": 2081267, "score": 5, "text": "<p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \\to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p>\n\n<p>$$Q_n(x)=P_n(x)- P_n(a) -\\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p>\n\n<p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p>\n\n<p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \\to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p>\n" }, { "answer_id": 2081349, "score": 2, "text": "<p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\\delta$. Your new polynomial should be \n$$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\\le \\delta$. 
We can choose $\\delta$ so that $Q$ does not exceed $\\epsilon/2$ on the whole set. Make sure to take $\\delta \\le \\epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\\epsilon$ and coincides with $f$ at the given points. </p>\n" }, { "answer_id": 2081267, "score": 5, "text": "<p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \\to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p>\n\n<p>$$Q_n(x)=P_n(x)- P_n(a) -\\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p>\n\n<p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p>\n\n<p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \\to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p>\n" }, { "answer_id": 2081349, "score": 2, "text": "<p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\\delta$. Your new polynomial should be \n$$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\\le \\delta$. We can choose $\\delta$ so that $Q$ does not exceed $\\epsilon/2$ on the whole set. Make sure to take $\\delta \\le \\epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\\epsilon$ and coincides with $f$ at the given points. </p>\n" } ]
Question: Polynomial approximation (Weierstrass&#39; Theorem) with equality at the endpoints Answer 1: <p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p> <p>$$Q_n(x)=P_n(x)- P_n(a) -\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p> <p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p> <p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p> Answer 2: <p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\delta$. Your new polynomial should be $$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\le \delta$. We can choose $\delta$ so that $Q$ does not exceed $\epsilon/2$ on the whole set. Make sure to take $\delta \le \epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\epsilon$ and coincides with $f$ at the given points. </p> Answer 3: <p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p> <p>$$Q_n(x)=P_n(x)- P_n(a) -\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p> <p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p> <p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p> Answer 4: <p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\delta$. Your new polynomial should be $$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\le \delta$. We can choose $\delta$ so that $Q$ does not exceed $\epsilon/2$ on the whole set. Make sure to take $\delta \le \epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\epsilon$ and coincides with $f$ at the given points. </p> Answer 5: <p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p> <p>$$Q_n(x)=P_n(x)- P_n(a) -\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p> <p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p> <p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p> Answer 6: <p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\delta$. Your new polynomial should be $$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\le \delta$. We can choose $\delta$ so that $Q$ does not exceed $\epsilon/2$ on the whole set. Make sure to take $\delta \le \epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\epsilon$ and coincides with $f$ at the given points. 
</p> Answer 7: <p>Let's do it first for the case $f(a) = f(b)=0.$ Find polynomials $P_n \to f$ uniformly on $[a,b].$ Then verfify that the polynomials</p> <p>$$Q_n(x)=P_n(x)- P_n(a) -\frac{P_n(b)-P_n(a)}{b-a}(x-a)$$</p> <p>converge to $f$ uniformly on $[a,b],$ with $Q_n(a) = Q_n(b) = 0$ for all $n.$</p> <p>In the general case, let $l(x)$ be the linear function connecting $(a,f(a))$ to $(b,f(b)).$ Then $f(x) - l(x)$ equals $0$ at the endpoints. From the above we can find polynomials $Q_n \to f-l$ uniformly on $[a,b],$ with $Q_n(a)= Q_n(b) = 0.$ Now check that $Q_n + l$ satisfies the requirements.</p> Answer 8: <p>You can prescribe equality of the functions at an arbitrary finite set of points ( derivatives too if you like). Let $P$ approximating $f$ by at most $\delta$. Your new polynomial should be $$P + Q$$ where $Q$ interpolates $f-P$ at the given points. The values of $f-P$ are all $\le \delta$. We can choose $\delta$ so that $Q$ does not exceed $\epsilon/2$ on the whole set. Make sure to take $\delta \le \epsilon/2$ also. Then $P+Q$ does not differ from $f$ by nore than $\epsilon$ and coincides with $f$ at the given points. </p>
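To see the construction in the first answer in action, here is a minimal NumPy sketch under purely illustrative assumptions ($f=\cos$ on $[0,3]$ and a degree-$8$ Chebyshev fit; neither choice comes from the original answers). Starting from any uniform approximant $P$, adding the linear function through the endpoint defects pins $p(a)=f(a)$ and $p(b)=f(b)$ while worsening the uniform error by at most $|f(a)-P(a)|+|f(b)-P(b)|$:

```python
import numpy as np

f = np.cos
a, b = 0.0, 3.0

# Any uniform polynomial approximant of f on [a, b]; a Chebyshev fit is convenient.
P = np.polynomial.chebyshev.Chebyshev.interpolate(f, deg=8, domain=[a, b])

def p(x):
    # Linear correction through the endpoint defects of P.
    ca, cb = f(a) - P(a), f(b) - P(b)
    return P(x) + ca + (cb - ca) * (x - a) / (b - a)

x = np.linspace(a, b, 1001)
assert abs(p(a) - f(a)) < 1e-12 and abs(p(b) - f(b)) < 1e-12
print("uniform error:", np.max(np.abs(p(x) - f(x))))
```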
permutation of $9$ digit
2,081,280
4
https://math.stackexchange.com/questions/2081280/permutation-of-9-digit
<p>How many $9-\text{digit}$ numbers can be formed using the digits $1, 2, 3, 4, 5, 6, 7, 8, 9$ so that each number has exactly $7$ digits that are less than the digit immediately following them?<br> <strong>Example</strong>: In the number $2314$, the digits $2$ and $1$ are each less than the digit that follows them. <br><br> <strong><em>Source</em></strong>: <strong>BDMO 2015, Dhaka regional</strong></p>
[ { "answer_id": 2081322, "score": 3, "text": "<p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p>\n\n<p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p>\n\n<p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p>\n" }, { "answer_id": 2125333, "score": 1, "text": "<p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br>\nNow let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br>\nSince $i$ ranges from $1 \\to 9-1 = 8$, so our final answer is -<br>\n$$\\sum_{i = 1}^{8} \\left\\{\\binom{9}{i}-1\\right\\} \\\\\n \\sum_{i = 1}^{8}\\binom{9}{i} - 8\\\\ \n \\sum_{i=0}^{9}\\binom{9}{i} - \\binom{9}{9} - \\binom{9}{0} - 8\\\\\n 2^9 - 10 = \\color{red}{\\boxed{502}}$$ </p>\n\n<p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\\{1,2,\\cdots,8\\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p>\n" }, { "answer_id": 2081322, "score": 3, "text": "<p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p>\n\n<p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p>\n\n<p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p>\n" }, { "answer_id": 2125333, "score": 1, "text": "<p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br>\nNow let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. 
Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br>\nSince $i$ ranges from $1 \\to 9-1 = 8$, so our final answer is -<br>\n$$\\sum_{i = 1}^{8} \\left\\{\\binom{9}{i}-1\\right\\} \\\\\n \\sum_{i = 1}^{8}\\binom{9}{i} - 8\\\\ \n \\sum_{i=0}^{9}\\binom{9}{i} - \\binom{9}{9} - \\binom{9}{0} - 8\\\\\n 2^9 - 10 = \\color{red}{\\boxed{502}}$$ </p>\n\n<p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\\{1,2,\\cdots,8\\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p>\n" }, { "answer_id": 2081322, "score": 3, "text": "<p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p>\n\n<p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p>\n\n<p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p>\n" }, { "answer_id": 2125333, "score": 1, "text": "<p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br>\nNow let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br>\nSince $i$ ranges from $1 \\to 9-1 = 8$, so our final answer is -<br>\n$$\\sum_{i = 1}^{8} \\left\\{\\binom{9}{i}-1\\right\\} \\\\\n \\sum_{i = 1}^{8}\\binom{9}{i} - 8\\\\ \n \\sum_{i=0}^{9}\\binom{9}{i} - \\binom{9}{9} - \\binom{9}{0} - 8\\\\\n 2^9 - 10 = \\color{red}{\\boxed{502}}$$ </p>\n\n<p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\\{1,2,\\cdots,8\\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p>\n" }, { "answer_id": 2081322, "score": 3, "text": "<p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p>\n\n<p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. 
There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p>\n\n<p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p>\n" }, { "answer_id": 2125333, "score": 1, "text": "<p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br>\nNow let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br>\nSince $i$ ranges from $1 \\to 9-1 = 8$, so our final answer is -<br>\n$$\\sum_{i = 1}^{8} \\left\\{\\binom{9}{i}-1\\right\\} \\\\\n \\sum_{i = 1}^{8}\\binom{9}{i} - 8\\\\ \n \\sum_{i=0}^{9}\\binom{9}{i} - \\binom{9}{9} - \\binom{9}{0} - 8\\\\\n 2^9 - 10 = \\color{red}{\\boxed{502}}$$ </p>\n\n<p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\\{1,2,\\cdots,8\\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p>\n" } ]
Question: permutation of $9$ digit Answer 1: <p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p> <p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p> <p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p> Answer 2: <p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br> Now let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br> Since $i$ ranges from $1 \to 9-1 = 8$, so our final answer is -<br> $$\sum_{i = 1}^{8} \left\{\binom{9}{i}-1\right\} \\ \sum_{i = 1}^{8}\binom{9}{i} - 8\\ \sum_{i=0}^{9}\binom{9}{i} - \binom{9}{9} - \binom{9}{0} - 8\\ 2^9 - 10 = \color{red}{\boxed{502}}$$ </p> <p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\{1,2,\cdots,8\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p> Answer 3: <p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p> <p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p> <p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p> Answer 4: <p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br> Now let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. 
Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br> Since $i$ ranges from $1 \to 9-1 = 8$, so our final answer is -<br> $$\sum_{i = 1}^{8} \left\{\binom{9}{i}-1\right\} \\ \sum_{i = 1}^{8}\binom{9}{i} - 8\\ \sum_{i=0}^{9}\binom{9}{i} - \binom{9}{9} - \binom{9}{0} - 8\\ 2^9 - 10 = \color{red}{\boxed{502}}$$ </p> <p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\{1,2,\cdots,8\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p> Answer 5: <p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p> <p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p> <p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p> Answer 6: <p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br> Now let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br> Since $i$ ranges from $1 \to 9-1 = 8$, so our final answer is -<br> $$\sum_{i = 1}^{8} \left\{\binom{9}{i}-1\right\} \\ \sum_{i = 1}^{8}\binom{9}{i} - 8\\ \sum_{i=0}^{9}\binom{9}{i} - \binom{9}{9} - \binom{9}{0} - 8\\ 2^9 - 10 = \color{red}{\boxed{502}}$$ </p> <p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\{1,2,\cdots,8\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p> Answer 7: <p>Given that the nine-digit number has exactly one point at which a digit's successor is less than itself, it can thus be divided into two parts, both of which are strictly ascending, neither of which is empty. (That is to say, the entire number isn't as a whole strictly ascending.)</p> <p>What's more, you can construct the number in exactly this fashion: Select a subset of the nine digits, and let those digits, in ascending order, form the prefix of the number. Then put the remaining digits in ascending order, and those form the suffix of the number. There are $2^9 = 512$ different subsets, so there are at most that many ways to generate a satisfactory number.</p> <p>However, there are a few subsets that won't work—namely, those subsets that are the $k$ smallest digits, for $k = 0$ to $9$. 
There are $10$ such non-satisfactory subsets, so the actual result should be $512-10 = 502$.</p> Answer 8: <p>Consider the difference between each of the $8$ pair of digits in the $9$ digit number. $7$ digits are in such place where that digit is less than the next digit. So there must be <strong>ONLY ONE</strong> pair so that the difference is <strong>positive</strong>.<br> Now let this positive difference happen between $i^{th}$ and $(i+1)^{th}$ digit in the number. Now we can choose any set of $i$ numbers and arrange them in increasing order, and place other $9-i$ numbers after them, in order.<br> Since $i$ ranges from $1 \to 9-1 = 8$, so our final answer is -<br> $$\sum_{i = 1}^{8} \left\{\binom{9}{i}-1\right\} \\ \sum_{i = 1}^{8}\binom{9}{i} - 8\\ \sum_{i=0}^{9}\binom{9}{i} - \binom{9}{9} - \binom{9}{0} - 8\\ 2^9 - 10 = \color{red}{\boxed{502}}$$ </p> <p>BTW, I am also Bangladeshi ... This problem repeated in some other regional too, with different constrains. Like we need to make $8$-digit numbers from $\{1,2,\cdots,8\}$ such that $6$ of them are in that such place. Here the answer will be $2^8-9$</p>
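Both answers arrive at $2^9-10=502$, which is the Eulerian number counting permutations of $1,\dots,9$ with exactly one descent. A brute-force enumeration over all $9!=362880$ permutations (a small illustrative check, not part of the original answers) confirms the count:

```python
from itertools import permutations

# Count permutations of 1..9 with exactly 7 ascending adjacent pairs,
# i.e. exactly one descent.
count = sum(
    1
    for p in permutations(range(1, 10))
    if sum(p[i] < p[i + 1] for i in range(8)) == 7
)
assert count == 2**9 - 10 == 502
print(count)  # 502
```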
Definition of Lie group isomorphism
2,081,584
4
https://math.stackexchange.com/questions/2081584/definition-of-lie-group-isomorphism
<p>Recently I have been studying Lie groups. </p> <p>The definition of <code>Lie group isomorphism</code> is the following:<br> <a href="https://en.wikipedia.org/wiki/Lie_group#Homomorphisms_and_isomorphisms" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lie_group#Homomorphisms_and_isomorphisms</a> </p> <p>Let $\phi: G\rightarrow H$, where $G$ and $H$ are Lie groups. $\phi$ is a Lie group isomorphism if </p> <blockquote> <ol> <li>$\phi$ is a bijective homomorphism. </li> <li>$\phi^{-1}$ is a Lie group homomorphism. </li> </ol> </blockquote> <p>My question is simply</p> <p>Do 1. and 2. imply that $\phi^{-1}$ is bijective?<br> Why not just say that $\phi$ and $\phi^{-1}$ are bijective Lie group homomorphisms? (it seems much clearer) I hope for a deep and clear explanation of both sentences. </p>
[ { "answer_id": 2081596, "score": 2, "text": "<p>Yes they imply that the inverse is a bijection. The inverse of a bijection is always a bijection. So stating one is a bijection is enough, the crux of it is that functions can be a homomorphism one way but not neccisserily the inverse. An example is homeomorphisms in topology. A function may be continuous but that does not mean that the inverse is continuous.</p>\n\n<p>For isomorphisms of course we must have like with homeomorphisms that the inverse and original function are homomorphisms, and in both cases it must be a bijection.</p>\n\n<p>Why it is not stated for the original function and inverse is as said, because they imply one another.</p>\n" }, { "answer_id": 2081595, "score": 1, "text": "<p>The existence of $\\phi^{-1}$ implies that $\\phi$ is bijective, and it is always true that $\\phi^{-1}$ is bijective (since $(\\phi^{-1})^{-1} = \\phi$, so $\\phi^{-1}$ is invertible). This is true of bijective maps in general, nothing specific to Lie groups. </p>\n\n<p>Your (equivalent) definition of a Lie group isomorphism therefore has a redundant condition (that $\\phi^{-1}$ is bijective) which should therefore be omitted. Then you simply have Wikipedia's definition. </p>\n" }, { "answer_id": 2081596, "score": 2, "text": "<p>Yes they imply that the inverse is a bijection. The inverse of a bijection is always a bijection. So stating one is a bijection is enough, the crux of it is that functions can be a homomorphism one way but not neccisserily the inverse. An example is homeomorphisms in topology. A function may be continuous but that does not mean that the inverse is continuous.</p>\n\n<p>For isomorphisms of course we must have like with homeomorphisms that the inverse and original function are homomorphisms, and in both cases it must be a bijection.</p>\n\n<p>Why it is not stated for the original function and inverse is as said, because they imply one another.</p>\n" }, { "answer_id": 2081595, "score": 1, "text": "<p>The existence of $\\phi^{-1}$ implies that $\\phi$ is bijective, and it is always true that $\\phi^{-1}$ is bijective (since $(\\phi^{-1})^{-1} = \\phi$, so $\\phi^{-1}$ is invertible). This is true of bijective maps in general, nothing specific to Lie groups. </p>\n\n<p>Your (equivalent) definition of a Lie group isomorphism therefore has a redundant condition (that $\\phi^{-1}$ is bijective) which should therefore be omitted. Then you simply have Wikipedia's definition. </p>\n" }, { "answer_id": 2081596, "score": 2, "text": "<p>Yes they imply that the inverse is a bijection. The inverse of a bijection is always a bijection. So stating one is a bijection is enough, the crux of it is that functions can be a homomorphism one way but not neccisserily the inverse. An example is homeomorphisms in topology. A function may be continuous but that does not mean that the inverse is continuous.</p>\n\n<p>For isomorphisms of course we must have like with homeomorphisms that the inverse and original function are homomorphisms, and in both cases it must be a bijection.</p>\n\n<p>Why it is not stated for the original function and inverse is as said, because they imply one another.</p>\n" }, { "answer_id": 2081595, "score": 1, "text": "<p>The existence of $\\phi^{-1}$ implies that $\\phi$ is bijective, and it is always true that $\\phi^{-1}$ is bijective (since $(\\phi^{-1})^{-1} = \\phi$, so $\\phi^{-1}$ is invertible). This is true of bijective maps in general, nothing specific to Lie groups. 
</p>\n\n<p>Your (equivalent) definition of a Lie group isomorphism therefore has a redundant condition (that $\\phi^{-1}$ is bijective) which should therefore be omitted. Then you simply have Wikipedia's definition. </p>\n" }, { "answer_id": 2081596, "score": 2, "text": "<p>Yes they imply that the inverse is a bijection. The inverse of a bijection is always a bijection. So stating one is a bijection is enough, the crux of it is that functions can be a homomorphism one way but not neccisserily the inverse. An example is homeomorphisms in topology. A function may be continuous but that does not mean that the inverse is continuous.</p>\n\n<p>For isomorphisms of course we must have like with homeomorphisms that the inverse and original function are homomorphisms, and in both cases it must be a bijection.</p>\n\n<p>Why it is not stated for the original function and inverse is as said, because they imply one another.</p>\n" }, { "answer_id": 2081595, "score": 1, "text": "<p>The existence of $\\phi^{-1}$ implies that $\\phi$ is bijective, and it is always true that $\\phi^{-1}$ is bijective (since $(\\phi^{-1})^{-1} = \\phi$, so $\\phi^{-1}$ is invertible). This is true of bijective maps in general, nothing specific to Lie groups. </p>\n\n<p>Your (equivalent) definition of a Lie group isomorphism therefore has a redundant condition (that $\\phi^{-1}$ is bijective) which should therefore be omitted. Then you simply have Wikipedia's definition. </p>\n" } ]
Question: Definition of Lie group isomorphism Answer 1: <p>Yes, they imply that the inverse is a bijection. The inverse of a bijection is always a bijection, so stating that one of them is a bijection is enough; the crux of it is that a function can be a homomorphism one way without its inverse necessarily being one. An example comes from topology: a function may be continuous, but that does not mean that its inverse is continuous.</p> <p>For isomorphisms, of course, as with homeomorphisms, we must require that both the original function and its inverse are homomorphisms, and in both cases the map must be a bijection.</p> <p>Why it is not stated for both the original function and the inverse is, as said, because they imply one another.</p> Answer 2: <p>The existence of $\phi^{-1}$ implies that $\phi$ is bijective, and it is always true that $\phi^{-1}$ is bijective (since $(\phi^{-1})^{-1} = \phi$, so $\phi^{-1}$ is invertible). This is true of bijective maps in general, nothing specific to Lie groups. </p> <p>Your (equivalent) definition of a Lie group isomorphism therefore has a redundant condition (that $\phi^{-1}$ is bijective), which should be omitted. Then you simply have Wikipedia's definition. </p>
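For concreteness (an illustration added here, not part of the original answers): the point that bijectivity alone does not transfer structure to the inverse is exactly the homeomorphism phenomenon cited above. The map $$ f:[0,2\pi)\to S^1\subset\mathbb{R}^2, \qquad f(t)=(\cos t,\sin t) $$ is a continuous bijection, yet $f^{-1}$ is not continuous at $(1,0)$: points of the circle just below $(1,0)$ are sent near $2\pi$, while $(1,0)$ itself is sent to $0$. For plain group homomorphisms, by contrast, the inverse of a bijective homomorphism is automatically a homomorphism, which is why the explicit bijectivity condition on $\phi^{-1}$ is redundant in the definition discussed above.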
Showing $\int_x^{x+1}f(t)\,dt \xrightarrow{x\to\infty}0$ for $f\in L^2 (\mathbb{R})$
2,081,701
4
https://math.stackexchange.com/questions/2081701/showing-int-xx1ft-dt-xrightarrowx-to-infty0-for-f-in-l2-mathbb
<blockquote> <p>Show that if <span class="math-container">$f\in L^2 (\mathbb{R})$</span> then <span class="math-container">$$\lim_{x \rightarrow \infty} g(x)=0,$$</span> where <span class="math-container">$$g(x)=\int_x^{x+1}f(t)\,dt$$</span></p> </blockquote> <p>Since <span class="math-container">$f \in L^2 (\mathbb{R})$</span>, we have <span class="math-container">$f \in L^2[x,x+1]$</span> for every <span class="math-container">$x$</span>. Then, by the Cauchy–Schwarz inequality, <span class="math-container">$$|g(x)| \leq \left(\int_x^{x+1} f(t)^2 dt\right)^{\frac{1}{2}} \left(\int_x^{x+1} 1^2\, dt\right)^{\frac{1}{2}}$$</span> <span class="math-container">$$|g(x)|\leq \left(\int_x^{x+1} f(t)^2 dt\right)^{\frac{1}{2}}$$</span></p>
[ { "answer_id": 2081757, "score": 8, "text": "<p>Using <a href=\"https://en.wikipedia.org/wiki/Jensen&#39;s_inequality\">Jensen's Inequality</a>, the sum\n$$\n\\begin{align}\n\\sum_{n=0}^\\infty\\left(\\int_n^{n+1}|f(x)|\\,\\mathrm{d}x\\right)^2\n&amp;\\le\\sum_{n=0}^\\infty\\int_n^{n+1}|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;=\\int_0^\\infty|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;\\le\\|f\\|_{L^2(\\mathbb{R})}^2\n\\end{align}\n$$\nconverges, which means the terms must go to $0$. That is,\n$$\n\\lim_{n\\to\\infty}\\int_n^{n+1}f(x)\\,\\mathrm{d}x=0\n$$</p>\n" }, { "answer_id": 2081733, "score": 3, "text": "<p>Continuing from your last step, you have </p>\n\n<p>$$|g(x)|\\leq\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$\nwhere $\\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$.</p>\n\n<p>So $$\\lim_{x\\to\\infty}|g(x)|\\leq\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$</p>\n\n<p>Since $|f(t)^2\\chi_{[x,x+1]}|\\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so </p>\n\n<p>$$\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=\\left(\\int_\\mathbb{R} \\lim_{x\\to\\infty}f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=0$$</p>\n\n<p>since $\\lim_{x\\to\\infty}\\chi_{[x,x+1]}=0$</p>\n\n<p>Hence $\\lim_{x\\to\\infty}|g(x)|\\leq 0$, and we can conclude from there.</p>\n" }, { "answer_id": 2081731, "score": 2, "text": "<blockquote>\n <p>The result is valid for functions in $C_0^\\infty(\\mathbb R)$. So, it is true for functions in $L^2(\\mathbb R)$ (as well as for functions in $L^p(\\mathbb R)$ with $1\\leq p&lt;\\infty$) by density.</p>\n</blockquote>\n\n<p><em>Details:</em> As $C_0^\\infty(\\mathbb R)$ is dense in $L^2(\\mathbb R)$, there is $(f_n)$ in $C_0^\\infty(\\mathbb R)$ such that \n$$\\|f_n-f\\|_{L^2(\\mathbb R)}\\xrightarrow{n\\to\\infty} 0.\\tag{1}$$\nTake $\\varepsilon&gt;0$. We want to show that there exist $M&gt;0$ such that\n$$x&gt;M\\quad\\Longrightarrow \\quad |g(x)|&lt;\\varepsilon.$$</p>\n\n<p>From $(1)$ there is $n_0\\in\\mathbb N$ such that\n$$\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}&lt;\\varepsilon.\\tag{2}$$</p>\n\n<p>As $f_{n_0}$ has compact support, there is $M&gt;0$ such that\n$$|x|&gt;M\\quad\\Longrightarrow \\quad|f_{n_0}(x)|=0.\\tag{3}$$</p>\n\n<p>From $(2)$, $(3)$ and your estimate,\n$$\\begin{align}\nx&gt;M\\quad\\Longrightarrow \\quad |g(x)|\\leq \\|f\\|_{L^2(x,x+1)}&amp;\\leq\\|f_{n_0}-f\\|_{L^2(x,x+1)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;\\leq\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;&lt;\\varepsilon+0=\\varepsilon.\n\\end{align}$$</p>\n" }, { "answer_id": 2081751, "score": 2, "text": "<p>We have that $\\int_{-\\infty}^{+\\infty}f(u)^2du&lt;+\\infty \\Rightarrow \\lim_{x\\rightarrow +\\infty} \\int_{-\\infty}^{x}f(u)^2du=\\int_{-\\infty}^{+\\infty}f(u)^2du$ and so for $ε&gt;0,\\ \\exists M&gt;0\\ :\\ \\forall x$ with $x&gt;M$, we have: </p>\n\n<p>$|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}\\ (1)$</p>\n\n<p>When $x&gt;M$ we also have $x&gt;M+1$. 
Thus $(1)$ and the triangle inequality yield that:</p>\n\n<p>$\\int_{x}^{x+1}f(u)^2du=|\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{x}f(u)^2du|=\\\\ |(\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )-(\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )|\\leq \\\\ |\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |+|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}+\\frac{ε^2}{2}=ε^2$</p>\n\n<p>Therefore for $ε&gt;0,\\ \\exists M&gt;0 : \\forall x$ with $x&gt;M$, we have:</p>\n\n<p>${(\\int_{x}^{x+1}f(u)^2du)}^{\\frac{1}{2}}&lt;ε \\Rightarrow |g(x)|&lt;ε$, by your attempt.</p>\n\n<p>From this point the definition of the limit to infinity finishes the job.</p>\n" }, { "answer_id": 2082656, "score": 2, "text": "<p>You have a good start. To finish, use the folling result: If $g\\in L^1(\\mathbb R),$ then $\\int_x^{x+1} |g| \\to 0$ as $x\\to \\infty.$</p>\n\n<p>Proof: Let $g_n = g\\chi_{[n,n+1]}.$ Then $g_n \\to 0$ pointwise everywhere, and $|g_n| \\le |g|$ on $\\mathbb R.$ By DCT, $\\int_{\\mathbb R} |g_n| \\to 0.$ That proves the result for $x= 1,2,\\dots,$ and that will give the result for arbitrary $x\\to \\infty.$</p>\n\n<p>In your problem, use $g=f^2.$</p>\n" }, { "answer_id": 2081730, "score": 1, "text": "<p>$d\\mu= f^2 dt$ defines a positive finite measure on $\\mathbb{R}$ (this is just another way of writing $\\mu (A) = \\int_{A} f(t)^2 dt$ if it's not clear), since $f$ is square-integrable.<br>\nThen, $\\mu(x,x+1)= \\mu(-\\infty,x+1) - \\mu(-\\infty,x)$ (this is allowed since both sets have finite measure).<br>\nThen $$ \\lim_{x \\to +\\infty} \\int_{x}^{x+1} f(t)^2 dt = \\lim_{x \\to +\\infty} \\mu(x,x+1) = \\mu(\\mathbb{R}) - \\mu(\\mathbb{R}) = 0$$</p>\n" }, { "answer_id": 2081757, "score": 8, "text": "<p>Using <a href=\"https://en.wikipedia.org/wiki/Jensen&#39;s_inequality\">Jensen's Inequality</a>, the sum\n$$\n\\begin{align}\n\\sum_{n=0}^\\infty\\left(\\int_n^{n+1}|f(x)|\\,\\mathrm{d}x\\right)^2\n&amp;\\le\\sum_{n=0}^\\infty\\int_n^{n+1}|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;=\\int_0^\\infty|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;\\le\\|f\\|_{L^2(\\mathbb{R})}^2\n\\end{align}\n$$\nconverges, which means the terms must go to $0$. That is,\n$$\n\\lim_{n\\to\\infty}\\int_n^{n+1}f(x)\\,\\mathrm{d}x=0\n$$</p>\n" }, { "answer_id": 2081733, "score": 3, "text": "<p>Continuing from your last step, you have </p>\n\n<p>$$|g(x)|\\leq\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$\nwhere $\\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$.</p>\n\n<p>So $$\\lim_{x\\to\\infty}|g(x)|\\leq\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$</p>\n\n<p>Since $|f(t)^2\\chi_{[x,x+1]}|\\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so </p>\n\n<p>$$\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=\\left(\\int_\\mathbb{R} \\lim_{x\\to\\infty}f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=0$$</p>\n\n<p>since $\\lim_{x\\to\\infty}\\chi_{[x,x+1]}=0$</p>\n\n<p>Hence $\\lim_{x\\to\\infty}|g(x)|\\leq 0$, and we can conclude from there.</p>\n" }, { "answer_id": 2081731, "score": 2, "text": "<blockquote>\n <p>The result is valid for functions in $C_0^\\infty(\\mathbb R)$. 
So, it is true for functions in $L^2(\\mathbb R)$ (as well as for functions in $L^p(\\mathbb R)$ with $1\\leq p&lt;\\infty$) by density.</p>\n</blockquote>\n\n<p><em>Details:</em> As $C_0^\\infty(\\mathbb R)$ is dense in $L^2(\\mathbb R)$, there is $(f_n)$ in $C_0^\\infty(\\mathbb R)$ such that \n$$\\|f_n-f\\|_{L^2(\\mathbb R)}\\xrightarrow{n\\to\\infty} 0.\\tag{1}$$\nTake $\\varepsilon&gt;0$. We want to show that there exist $M&gt;0$ such that\n$$x&gt;M\\quad\\Longrightarrow \\quad |g(x)|&lt;\\varepsilon.$$</p>\n\n<p>From $(1)$ there is $n_0\\in\\mathbb N$ such that\n$$\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}&lt;\\varepsilon.\\tag{2}$$</p>\n\n<p>As $f_{n_0}$ has compact support, there is $M&gt;0$ such that\n$$|x|&gt;M\\quad\\Longrightarrow \\quad|f_{n_0}(x)|=0.\\tag{3}$$</p>\n\n<p>From $(2)$, $(3)$ and your estimate,\n$$\\begin{align}\nx&gt;M\\quad\\Longrightarrow \\quad |g(x)|\\leq \\|f\\|_{L^2(x,x+1)}&amp;\\leq\\|f_{n_0}-f\\|_{L^2(x,x+1)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;\\leq\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;&lt;\\varepsilon+0=\\varepsilon.\n\\end{align}$$</p>\n" }, { "answer_id": 2081751, "score": 2, "text": "<p>We have that $\\int_{-\\infty}^{+\\infty}f(u)^2du&lt;+\\infty \\Rightarrow \\lim_{x\\rightarrow +\\infty} \\int_{-\\infty}^{x}f(u)^2du=\\int_{-\\infty}^{+\\infty}f(u)^2du$ and so for $ε&gt;0,\\ \\exists M&gt;0\\ :\\ \\forall x$ with $x&gt;M$, we have: </p>\n\n<p>$|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}\\ (1)$</p>\n\n<p>When $x&gt;M$ we also have $x&gt;M+1$. Thus $(1)$ and the triangle inequality yield that:</p>\n\n<p>$\\int_{x}^{x+1}f(u)^2du=|\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{x}f(u)^2du|=\\\\ |(\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )-(\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )|\\leq \\\\ |\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |+|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}+\\frac{ε^2}{2}=ε^2$</p>\n\n<p>Therefore for $ε&gt;0,\\ \\exists M&gt;0 : \\forall x$ with $x&gt;M$, we have:</p>\n\n<p>${(\\int_{x}^{x+1}f(u)^2du)}^{\\frac{1}{2}}&lt;ε \\Rightarrow |g(x)|&lt;ε$, by your attempt.</p>\n\n<p>From this point the definition of the limit to infinity finishes the job.</p>\n" }, { "answer_id": 2082656, "score": 2, "text": "<p>You have a good start. 
To finish, use the folling result: If $g\\in L^1(\\mathbb R),$ then $\\int_x^{x+1} |g| \\to 0$ as $x\\to \\infty.$</p>\n\n<p>Proof: Let $g_n = g\\chi_{[n,n+1]}.$ Then $g_n \\to 0$ pointwise everywhere, and $|g_n| \\le |g|$ on $\\mathbb R.$ By DCT, $\\int_{\\mathbb R} |g_n| \\to 0.$ That proves the result for $x= 1,2,\\dots,$ and that will give the result for arbitrary $x\\to \\infty.$</p>\n\n<p>In your problem, use $g=f^2.$</p>\n" }, { "answer_id": 2081730, "score": 1, "text": "<p>$d\\mu= f^2 dt$ defines a positive finite measure on $\\mathbb{R}$ (this is just another way of writing $\\mu (A) = \\int_{A} f(t)^2 dt$ if it's not clear), since $f$ is square-integrable.<br>\nThen, $\\mu(x,x+1)= \\mu(-\\infty,x+1) - \\mu(-\\infty,x)$ (this is allowed since both sets have finite measure).<br>\nThen $$ \\lim_{x \\to +\\infty} \\int_{x}^{x+1} f(t)^2 dt = \\lim_{x \\to +\\infty} \\mu(x,x+1) = \\mu(\\mathbb{R}) - \\mu(\\mathbb{R}) = 0$$</p>\n" }, { "answer_id": 2081757, "score": 8, "text": "<p>Using <a href=\"https://en.wikipedia.org/wiki/Jensen&#39;s_inequality\">Jensen's Inequality</a>, the sum\n$$\n\\begin{align}\n\\sum_{n=0}^\\infty\\left(\\int_n^{n+1}|f(x)|\\,\\mathrm{d}x\\right)^2\n&amp;\\le\\sum_{n=0}^\\infty\\int_n^{n+1}|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;=\\int_0^\\infty|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;\\le\\|f\\|_{L^2(\\mathbb{R})}^2\n\\end{align}\n$$\nconverges, which means the terms must go to $0$. That is,\n$$\n\\lim_{n\\to\\infty}\\int_n^{n+1}f(x)\\,\\mathrm{d}x=0\n$$</p>\n" }, { "answer_id": 2081733, "score": 3, "text": "<p>Continuing from your last step, you have </p>\n\n<p>$$|g(x)|\\leq\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$\nwhere $\\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$.</p>\n\n<p>So $$\\lim_{x\\to\\infty}|g(x)|\\leq\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$</p>\n\n<p>Since $|f(t)^2\\chi_{[x,x+1]}|\\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so </p>\n\n<p>$$\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=\\left(\\int_\\mathbb{R} \\lim_{x\\to\\infty}f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=0$$</p>\n\n<p>since $\\lim_{x\\to\\infty}\\chi_{[x,x+1]}=0$</p>\n\n<p>Hence $\\lim_{x\\to\\infty}|g(x)|\\leq 0$, and we can conclude from there.</p>\n" }, { "answer_id": 2081731, "score": 2, "text": "<blockquote>\n <p>The result is valid for functions in $C_0^\\infty(\\mathbb R)$. So, it is true for functions in $L^2(\\mathbb R)$ (as well as for functions in $L^p(\\mathbb R)$ with $1\\leq p&lt;\\infty$) by density.</p>\n</blockquote>\n\n<p><em>Details:</em> As $C_0^\\infty(\\mathbb R)$ is dense in $L^2(\\mathbb R)$, there is $(f_n)$ in $C_0^\\infty(\\mathbb R)$ such that \n$$\\|f_n-f\\|_{L^2(\\mathbb R)}\\xrightarrow{n\\to\\infty} 0.\\tag{1}$$\nTake $\\varepsilon&gt;0$. 
We want to show that there exist $M&gt;0$ such that\n$$x&gt;M\\quad\\Longrightarrow \\quad |g(x)|&lt;\\varepsilon.$$</p>\n\n<p>From $(1)$ there is $n_0\\in\\mathbb N$ such that\n$$\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}&lt;\\varepsilon.\\tag{2}$$</p>\n\n<p>As $f_{n_0}$ has compact support, there is $M&gt;0$ such that\n$$|x|&gt;M\\quad\\Longrightarrow \\quad|f_{n_0}(x)|=0.\\tag{3}$$</p>\n\n<p>From $(2)$, $(3)$ and your estimate,\n$$\\begin{align}\nx&gt;M\\quad\\Longrightarrow \\quad |g(x)|\\leq \\|f\\|_{L^2(x,x+1)}&amp;\\leq\\|f_{n_0}-f\\|_{L^2(x,x+1)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;\\leq\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;&lt;\\varepsilon+0=\\varepsilon.\n\\end{align}$$</p>\n" }, { "answer_id": 2081751, "score": 2, "text": "<p>We have that $\\int_{-\\infty}^{+\\infty}f(u)^2du&lt;+\\infty \\Rightarrow \\lim_{x\\rightarrow +\\infty} \\int_{-\\infty}^{x}f(u)^2du=\\int_{-\\infty}^{+\\infty}f(u)^2du$ and so for $ε&gt;0,\\ \\exists M&gt;0\\ :\\ \\forall x$ with $x&gt;M$, we have: </p>\n\n<p>$|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}\\ (1)$</p>\n\n<p>When $x&gt;M$ we also have $x&gt;M+1$. Thus $(1)$ and the triangle inequality yield that:</p>\n\n<p>$\\int_{x}^{x+1}f(u)^2du=|\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{x}f(u)^2du|=\\\\ |(\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )-(\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )|\\leq \\\\ |\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |+|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}+\\frac{ε^2}{2}=ε^2$</p>\n\n<p>Therefore for $ε&gt;0,\\ \\exists M&gt;0 : \\forall x$ with $x&gt;M$, we have:</p>\n\n<p>${(\\int_{x}^{x+1}f(u)^2du)}^{\\frac{1}{2}}&lt;ε \\Rightarrow |g(x)|&lt;ε$, by your attempt.</p>\n\n<p>From this point the definition of the limit to infinity finishes the job.</p>\n" }, { "answer_id": 2082656, "score": 2, "text": "<p>You have a good start. To finish, use the folling result: If $g\\in L^1(\\mathbb R),$ then $\\int_x^{x+1} |g| \\to 0$ as $x\\to \\infty.$</p>\n\n<p>Proof: Let $g_n = g\\chi_{[n,n+1]}.$ Then $g_n \\to 0$ pointwise everywhere, and $|g_n| \\le |g|$ on $\\mathbb R.$ By DCT, $\\int_{\\mathbb R} |g_n| \\to 0.$ That proves the result for $x= 1,2,\\dots,$ and that will give the result for arbitrary $x\\to \\infty.$</p>\n\n<p>In your problem, use $g=f^2.$</p>\n" }, { "answer_id": 2081730, "score": 1, "text": "<p>$d\\mu= f^2 dt$ defines a positive finite measure on $\\mathbb{R}$ (this is just another way of writing $\\mu (A) = \\int_{A} f(t)^2 dt$ if it's not clear), since $f$ is square-integrable.<br>\nThen, $\\mu(x,x+1)= \\mu(-\\infty,x+1) - \\mu(-\\infty,x)$ (this is allowed since both sets have finite measure).<br>\nThen $$ \\lim_{x \\to +\\infty} \\int_{x}^{x+1} f(t)^2 dt = \\lim_{x \\to +\\infty} \\mu(x,x+1) = \\mu(\\mathbb{R}) - \\mu(\\mathbb{R}) = 0$$</p>\n" }, { "answer_id": 2081757, "score": 8, "text": "<p>Using <a href=\"https://en.wikipedia.org/wiki/Jensen&#39;s_inequality\">Jensen's Inequality</a>, the sum\n$$\n\\begin{align}\n\\sum_{n=0}^\\infty\\left(\\int_n^{n+1}|f(x)|\\,\\mathrm{d}x\\right)^2\n&amp;\\le\\sum_{n=0}^\\infty\\int_n^{n+1}|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;=\\int_0^\\infty|f(x)|^2\\,\\mathrm{d}x\\\\\n&amp;\\le\\|f\\|_{L^2(\\mathbb{R})}^2\n\\end{align}\n$$\nconverges, which means the terms must go to $0$. 
That is,\n$$\n\\lim_{n\\to\\infty}\\int_n^{n+1}f(x)\\,\\mathrm{d}x=0\n$$</p>\n" }, { "answer_id": 2081733, "score": 3, "text": "<p>Continuing from your last step, you have </p>\n\n<p>$$|g(x)|\\leq\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$\nwhere $\\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$.</p>\n\n<p>So $$\\lim_{x\\to\\infty}|g(x)|\\leq\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}$$</p>\n\n<p>Since $|f(t)^2\\chi_{[x,x+1]}|\\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so </p>\n\n<p>$$\\lim_{x\\to\\infty}\\left(\\int_\\mathbb{R} f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=\\left(\\int_\\mathbb{R} \\lim_{x\\to\\infty}f(t)^2\\chi_{[x,x+1]}\\,dt\\right)^{1/2}=0$$</p>\n\n<p>since $\\lim_{x\\to\\infty}\\chi_{[x,x+1]}=0$</p>\n\n<p>Hence $\\lim_{x\\to\\infty}|g(x)|\\leq 0$, and we can conclude from there.</p>\n" }, { "answer_id": 2081731, "score": 2, "text": "<blockquote>\n <p>The result is valid for functions in $C_0^\\infty(\\mathbb R)$. So, it is true for functions in $L^2(\\mathbb R)$ (as well as for functions in $L^p(\\mathbb R)$ with $1\\leq p&lt;\\infty$) by density.</p>\n</blockquote>\n\n<p><em>Details:</em> As $C_0^\\infty(\\mathbb R)$ is dense in $L^2(\\mathbb R)$, there is $(f_n)$ in $C_0^\\infty(\\mathbb R)$ such that \n$$\\|f_n-f\\|_{L^2(\\mathbb R)}\\xrightarrow{n\\to\\infty} 0.\\tag{1}$$\nTake $\\varepsilon&gt;0$. We want to show that there exist $M&gt;0$ such that\n$$x&gt;M\\quad\\Longrightarrow \\quad |g(x)|&lt;\\varepsilon.$$</p>\n\n<p>From $(1)$ there is $n_0\\in\\mathbb N$ such that\n$$\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}&lt;\\varepsilon.\\tag{2}$$</p>\n\n<p>As $f_{n_0}$ has compact support, there is $M&gt;0$ such that\n$$|x|&gt;M\\quad\\Longrightarrow \\quad|f_{n_0}(x)|=0.\\tag{3}$$</p>\n\n<p>From $(2)$, $(3)$ and your estimate,\n$$\\begin{align}\nx&gt;M\\quad\\Longrightarrow \\quad |g(x)|\\leq \\|f\\|_{L^2(x,x+1)}&amp;\\leq\\|f_{n_0}-f\\|_{L^2(x,x+1)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;\\leq\\|f_{n_0}-f\\|_{L^2(\\mathbb R)}+\\|f_{n_0}\\|_{L^2(x,x+1)}\\\\\n&amp;&lt;\\varepsilon+0=\\varepsilon.\n\\end{align}$$</p>\n" }, { "answer_id": 2081751, "score": 2, "text": "<p>We have that $\\int_{-\\infty}^{+\\infty}f(u)^2du&lt;+\\infty \\Rightarrow \\lim_{x\\rightarrow +\\infty} \\int_{-\\infty}^{x}f(u)^2du=\\int_{-\\infty}^{+\\infty}f(u)^2du$ and so for $ε&gt;0,\\ \\exists M&gt;0\\ :\\ \\forall x$ with $x&gt;M$, we have: </p>\n\n<p>$|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}\\ (1)$</p>\n\n<p>When $x&gt;M$ we also have $x&gt;M+1$. Thus $(1)$ and the triangle inequality yield that:</p>\n\n<p>$\\int_{x}^{x+1}f(u)^2du=|\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{x}f(u)^2du|=\\\\ |(\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )-(\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du )|\\leq \\\\ |\\int_{-\\infty}^{x+1}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |+|\\int_{-\\infty}^{x}f(u)^2du -\\int_{-\\infty}^{+\\infty}f(u)^2du |&lt;\\frac{ε^2}{2}+\\frac{ε^2}{2}=ε^2$</p>\n\n<p>Therefore for $ε&gt;0,\\ \\exists M&gt;0 : \\forall x$ with $x&gt;M$, we have:</p>\n\n<p>${(\\int_{x}^{x+1}f(u)^2du)}^{\\frac{1}{2}}&lt;ε \\Rightarrow |g(x)|&lt;ε$, by your attempt.</p>\n\n<p>From this point the definition of the limit to infinity finishes the job.</p>\n" }, { "answer_id": 2082656, "score": 2, "text": "<p>You have a good start. 
To finish, use the folling result: If $g\\in L^1(\\mathbb R),$ then $\\int_x^{x+1} |g| \\to 0$ as $x\\to \\infty.$</p>\n\n<p>Proof: Let $g_n = g\\chi_{[n,n+1]}.$ Then $g_n \\to 0$ pointwise everywhere, and $|g_n| \\le |g|$ on $\\mathbb R.$ By DCT, $\\int_{\\mathbb R} |g_n| \\to 0.$ That proves the result for $x= 1,2,\\dots,$ and that will give the result for arbitrary $x\\to \\infty.$</p>\n\n<p>In your problem, use $g=f^2.$</p>\n" }, { "answer_id": 2081730, "score": 1, "text": "<p>$d\\mu= f^2 dt$ defines a positive finite measure on $\\mathbb{R}$ (this is just another way of writing $\\mu (A) = \\int_{A} f(t)^2 dt$ if it's not clear), since $f$ is square-integrable.<br>\nThen, $\\mu(x,x+1)= \\mu(-\\infty,x+1) - \\mu(-\\infty,x)$ (this is allowed since both sets have finite measure).<br>\nThen $$ \\lim_{x \\to +\\infty} \\int_{x}^{x+1} f(t)^2 dt = \\lim_{x \\to +\\infty} \\mu(x,x+1) = \\mu(\\mathbb{R}) - \\mu(\\mathbb{R}) = 0$$</p>\n" } ]
Question: Showing $\int_x^{x+1}f(t)\,dt \xrightarrow{x\to\infty}0$ for $f\in L^2 (\mathbb{R})$ Answer 1: <p>Using <a href="https://en.wikipedia.org/wiki/Jensen&#39;s_inequality">Jensen's Inequality</a>, the sum $$ \begin{align} \sum_{n=0}^\infty\left(\int_n^{n+1}|f(x)|\,\mathrm{d}x\right)^2 &amp;\le\sum_{n=0}^\infty\int_n^{n+1}|f(x)|^2\,\mathrm{d}x\\ &amp;=\int_0^\infty|f(x)|^2\,\mathrm{d}x\\ &amp;\le\|f\|_{L^2(\mathbb{R})}^2 \end{align} $$ converges, which means the terms must go to $0$. That is, $$ \lim_{n\to\infty}\int_n^{n+1}f(x)\,\mathrm{d}x=0 $$</p> Answer 2: <p>Continuing from your last step, you have </p> <p>$$|g(x)|\leq\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}$$ where $\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$.</p> <p>So $$\lim_{x\to\infty}|g(x)|\leq\lim_{x\to\infty}\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}$$</p> <p>Since $|f(t)^2\chi_{[x,x+1]}|\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so </p> <p>$$\lim_{x\to\infty}\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}=\left(\int_\mathbb{R} \lim_{x\to\infty}f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}=0$$</p> <p>since $\lim_{x\to\infty}\chi_{[x,x+1]}=0$</p> <p>Hence $\lim_{x\to\infty}|g(x)|\leq 0$, and we can conclude from there.</p> Answer 3: <blockquote> <p>The result is valid for functions in $C_0^\infty(\mathbb R)$. So, it is true for functions in $L^2(\mathbb R)$ (as well as for functions in $L^p(\mathbb R)$ with $1\leq p&lt;\infty$) by density.</p> </blockquote> <p><em>Details:</em> As $C_0^\infty(\mathbb R)$ is dense in $L^2(\mathbb R)$, there is $(f_n)$ in $C_0^\infty(\mathbb R)$ such that $$\|f_n-f\|_{L^2(\mathbb R)}\xrightarrow{n\to\infty} 0.\tag{1}$$ Take $\varepsilon&gt;0$. We want to show that there exists $M&gt;0$ such that $$x&gt;M\quad\Longrightarrow \quad |g(x)|&lt;\varepsilon.$$</p> <p>From $(1)$ there is $n_0\in\mathbb N$ such that $$\|f_{n_0}-f\|_{L^2(\mathbb R)}&lt;\varepsilon.\tag{2}$$</p> <p>As $f_{n_0}$ has compact support, there is $M&gt;0$ such that $$|x|&gt;M\quad\Longrightarrow \quad|f_{n_0}(x)|=0.\tag{3}$$</p> <p>From $(2)$, $(3)$ and your estimate, $$\begin{align} x&gt;M\quad\Longrightarrow \quad |g(x)|\leq \|f\|_{L^2(x,x+1)}&amp;\leq\|f_{n_0}-f\|_{L^2(x,x+1)}+\|f_{n_0}\|_{L^2(x,x+1)}\\ &amp;\leq\|f_{n_0}-f\|_{L^2(\mathbb R)}+\|f_{n_0}\|_{L^2(x,x+1)}\\ &amp;&lt;\varepsilon+0=\varepsilon. \end{align}$$</p> Answer 4: <p>We have that $\int_{-\infty}^{+\infty}f(u)^2du&lt;+\infty \Rightarrow \lim_{x\rightarrow +\infty} \int_{-\infty}^{x}f(u)^2du=\int_{-\infty}^{+\infty}f(u)^2du$ and so for $ε&gt;0,\ \exists M&gt;0\ :\ \forall x$ with $x&gt;M$, we have: </p> <p>$|\int_{-\infty}^{x}f(u)^2du -\int_{-\infty}^{+\infty}f(u)^2du |&lt;\frac{ε^2}{2}\ (1)$</p> <p>When $x&gt;M$ we also have $x+1&gt;M$. Thus $(1)$ and the triangle inequality yield that:</p> <p>$\int_{x}^{x+1}f(u)^2du=|\int_{-\infty}^{x+1}f(u)^2du -\int_{-\infty}^{x}f(u)^2du|=\\ |(\int_{-\infty}^{x+1}f(u)^2du -\int_{-\infty}^{+\infty}f(u)^2du )-(\int_{-\infty}^{x}f(u)^2du -\int_{-\infty}^{+\infty}f(u)^2du )|\leq \\ |\int_{-\infty}^{x+1}f(u)^2du -\int_{-\infty}^{+\infty}f(u)^2du |+|\int_{-\infty}^{x}f(u)^2du -\int_{-\infty}^{+\infty}f(u)^2du |&lt;\frac{ε^2}{2}+\frac{ε^2}{2}=ε^2$</p> <p>Therefore for $ε&gt;0,\ \exists M&gt;0 : \forall x$ with $x&gt;M$, we have:</p> <p>${(\int_{x}^{x+1}f(u)^2du)}^{\frac{1}{2}}&lt;ε \Rightarrow |g(x)|&lt;ε$, by your attempt.</p> <p>From this point the definition of the limit to infinity finishes the job.</p> Answer 5: <p>You have a good start. To finish, use the following result: If $g\in L^1(\mathbb R),$ then $\int_x^{x+1} |g| \to 0$ as $x\to \infty.$</p> <p>Proof: Let $g_n = g\chi_{[n,n+1]}.$ Then $g_n \to 0$ pointwise everywhere, and $|g_n| \le |g|$ on $\mathbb R.$ By DCT, $\int_{\mathbb R} |g_n| \to 0.$ That proves the result for $x= 1,2,\dots,$ and that will give the result for arbitrary $x\to \infty.$</p> <p>In your problem, use $g=f^2.$</p> Answer 6: <p>$d\mu= f^2 dt$ defines a positive finite measure on $\mathbb{R}$ (this is just another way of writing $\mu (A) = \int_{A} f(t)^2 dt$ if it's not clear), since $f$ is square-integrable.<br> Then, $\mu(x,x+1)= \mu(-\infty,x+1) - \mu(-\infty,x)$ (this is allowed since both sets have finite measure).<br> Then $$ \lim_{x \to +\infty} \int_{x}^{x+1} f(t)^2 dt = \lim_{x \to +\infty} \mu(x,x+1) = \mu(\mathbb{R}) - \mu(\mathbb{R}) = 0$$</p>
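A concrete check (added for illustration; the specific $f$ here is an editorial choice, not from the thread): take $f(t)=1/t$ for $t\ge 1$ and $f(t)=0$ otherwise. Then $f\in L^2(\mathbb{R})$ since $\int_1^{\infty}t^{-2}\,dt=1$, yet $f\notin L^1(\mathbb{R})$. For $x\ge 1$, $$ g(x)=\int_x^{x+1}\frac{dt}{t}=\ln\Bigl(1+\frac{1}{x}\Bigr)\xrightarrow{x\to\infty}0, $$ in agreement with the Cauchy–Schwarz bound from the question, whose right-hand side is $\left(\int_x^{x+1}t^{-2}\,dt\right)^{1/2}=\bigl(x(x+1)\bigr)^{-1/2}\to 0$. The example shows that the conclusion really rests on square-integrability and does not require $f\in L^1$.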
Limits: if $ \lim_{x\to\infty}f(x) = 0 $, does $\lim_{x \to 0}f(1/x)$ exist?
2,081,817
4
https://math.stackexchange.com/questions/2081817/limits-if-lim-x-to-inftyfx-0-does-lim-x-to-0f1-x-exist
<p>Let $f$ be a function: $$ f:\mathbb{R} \to \mathbb{R} $$ It is known that: $$ \lim_{x\to\infty}f(x) = 0 $$ I need to prove or disprove that the following limit exists: $$ \lim_{x \to 0}f\left(\frac{1}{x}\right) $$</p>
[ { "answer_id": 2081825, "score": 8, "text": "<p>Example:</p>\n\n<p>$$f(x)=\\begin{cases}\\frac1x&amp;;x&gt;0\\\\1&amp;;x\\le0\\end{cases}$$</p>\n\n<p>$$\\lim_{x\\to\\infty}f(x)=0$$</p>\n\n<p>but</p>\n\n<p>$$\\lim_{x\\to0^+}f(1/x)=\\lim_{x\\to0^+}x=0$$</p>\n\n<p>$$\\lim_{x\\to0^-}f(1/x)=\\lim_{x\\to0^-}1=1$$</p>\n" }, { "answer_id": 2081834, "score": 5, "text": "<p>Maybe the simplest counterexample would be $f(x)=e^{-x}$.</p>\n\n<p>The limit $$\\lim_{x\\to 0} e^{-\\frac1x}$$ does not exist, because if $x$ is small but positive, $e^{-\\frac1x}$ is close to $0$, but if $x$ is small and negative, then $e^{-\\frac1x}$ is large.</p>\n" }, { "answer_id": 2081825, "score": 8, "text": "<p>Example:</p>\n\n<p>$$f(x)=\\begin{cases}\\frac1x&amp;;x&gt;0\\\\1&amp;;x\\le0\\end{cases}$$</p>\n\n<p>$$\\lim_{x\\to\\infty}f(x)=0$$</p>\n\n<p>but</p>\n\n<p>$$\\lim_{x\\to0^+}f(1/x)=\\lim_{x\\to0^+}x=0$$</p>\n\n<p>$$\\lim_{x\\to0^-}f(1/x)=\\lim_{x\\to0^-}1=1$$</p>\n" }, { "answer_id": 2081834, "score": 5, "text": "<p>Maybe the simplest counterexample would be $f(x)=e^{-x}$.</p>\n\n<p>The limit $$\\lim_{x\\to 0} e^{-\\frac1x}$$ does not exist, because if $x$ is small but positive, $e^{-\\frac1x}$ is close to $0$, but if $x$ is small and negative, then $e^{-\\frac1x}$ is large.</p>\n" }, { "answer_id": 2081825, "score": 8, "text": "<p>Example:</p>\n\n<p>$$f(x)=\\begin{cases}\\frac1x&amp;;x&gt;0\\\\1&amp;;x\\le0\\end{cases}$$</p>\n\n<p>$$\\lim_{x\\to\\infty}f(x)=0$$</p>\n\n<p>but</p>\n\n<p>$$\\lim_{x\\to0^+}f(1/x)=\\lim_{x\\to0^+}x=0$$</p>\n\n<p>$$\\lim_{x\\to0^-}f(1/x)=\\lim_{x\\to0^-}1=1$$</p>\n" }, { "answer_id": 2081834, "score": 5, "text": "<p>Maybe the simplest counterexample would be $f(x)=e^{-x}$.</p>\n\n<p>The limit $$\\lim_{x\\to 0} e^{-\\frac1x}$$ does not exist, because if $x$ is small but positive, $e^{-\\frac1x}$ is close to $0$, but if $x$ is small and negative, then $e^{-\\frac1x}$ is large.</p>\n" }, { "answer_id": 2081825, "score": 8, "text": "<p>Example:</p>\n\n<p>$$f(x)=\\begin{cases}\\frac1x&amp;;x&gt;0\\\\1&amp;;x\\le0\\end{cases}$$</p>\n\n<p>$$\\lim_{x\\to\\infty}f(x)=0$$</p>\n\n<p>but</p>\n\n<p>$$\\lim_{x\\to0^+}f(1/x)=\\lim_{x\\to0^+}x=0$$</p>\n\n<p>$$\\lim_{x\\to0^-}f(1/x)=\\lim_{x\\to0^-}1=1$$</p>\n" }, { "answer_id": 2081834, "score": 5, "text": "<p>Maybe the simplest counterexample would be $f(x)=e^{-x}$.</p>\n\n<p>The limit $$\\lim_{x\\to 0} e^{-\\frac1x}$$ does not exist, because if $x$ is small but positive, $e^{-\\frac1x}$ is close to $0$, but if $x$ is small and negative, then $e^{-\\frac1x}$ is large.</p>\n" } ]
Question: Limits: if $ \lim_{x\to\infty}f(x) = 0 $, does $\lim_{x \to 0}f(1/x)$ exist? Answer 1: <p>Example:</p> <p>$$f(x)=\begin{cases}\frac1x&amp;;x&gt;0\\1&amp;;x\le0\end{cases}$$</p> <p>$$\lim_{x\to\infty}f(x)=0$$</p> <p>but</p> <p>$$\lim_{x\to0^+}f(1/x)=\lim_{x\to0^+}x=0$$</p> <p>$$\lim_{x\to0^-}f(1/x)=\lim_{x\to0^-}1=1$$</p> Answer 2: <p>Maybe the simplest counterexample would be $f(x)=e^{-x}$.</p> <p>The limit $$\lim_{x\to 0} e^{-\frac1x}$$ does not exist, because if $x$ is small but positive, $e^{-\frac1x}$ is close to $0$, but if $x$ is small and negative, then $e^{-\frac1x}$ is large.</p>
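To spell out the one-sided behaviour in the second answer (a worked check, not part of the original thread): with $f(x)=e^{-x}$ one certainly has $\lim_{x\to\infty}f(x)=0$, while $$ \lim_{x\to 0^+}e^{-1/x}=0, \qquad \lim_{x\to 0^-}e^{-1/x}=+\infty, $$ since $1/x\to+\infty$ as $x\to 0^+$ but $1/x\to-\infty$ as $x\to 0^-$ (for instance $x=-0.01$ gives $e^{100}$), so the two one-sided limits disagree and $\lim_{x\to 0}f(1/x)$ does not exist. Both counterexamples exploit the same gap in the hypothesis: $\lim_{x\to\infty}f(x)=0$ says nothing about $f$ on the negative half-line, which is exactly what $f(1/x)$ probes as $x\to 0^-$.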
Can I construct a collection of pairs of socks?
2,081,822
4
https://math.stackexchange.com/questions/2081822/can-i-construct-a-collection-of-pairs-of-socks
<p>A standard example of something that cannot be done without the axiom of choice is choosing one sock from each pair of socks in an infinite collection of pairs of socks. In contrast, it <em>is</em> possible to do a similar thing for shoes with the axiom of replacement, using the functional relation $P(x,y)$ which evaluates to true iff x is a pair of shoes and y is the left shoe from this pair.</p> <p>Is it possible to construct, in ZF, a collection of pairs whose elements are as indistinguishable as the elements of a pair of socks, so that the axiom of replacement won't help us choose one from each pair? I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p>
[ { "answer_id": 2081835, "score": 4, "text": "<p>You can't <em>construct</em> such things in ZF (because the ZF world you work in could actually be a ZFC world) -- but they may (or may not) <em>exist</em> all the same.</p>\n\n<blockquote>\n <p>I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p>\n</blockquote>\n\n<p>That is not actually a good intuition.</p>\n\n<p>There are (supposing ZF is consistent) models of ZF in which there is a countable family of pairwise disjoint <strong>2-element</strong> sets that has no choice function.</p>\n\n<p>In such a model, if you view <em>any single</em> of the 2-element sets in isolation, you can perfectly well distinguish one of its elements from another. What prevents you from constructing a choice function using Replacement is that you need <em>different</em> properties for distinguishing elements of different pairs -- there is no <em>single</em> formula that is true for exactly one element of <em>each</em> of the pairs.</p>\n" }, { "answer_id": 2082239, "score": 3, "text": "<p>Let me add to Henning's excellent answer that it is consistent with $\\sf ZF$ that every family of finite sets admits a choice function, in particular there are no Russell sets (i.e., sets which can be partitioned into pairs, and do not admit a choice function on any infinite family of these pairs).</p>\n\n<p>In fact, this is consistent with even the failure of countable choice. Namely, it might be that we cannot choose from a countable family of non-empty sets; but we are still able to choose from a countable family of <em>pairs</em>.</p>\n\n<p>So we cannot prove outright from $\\sf ZF$ or even $\\sf ZF+\\lnot AC_\\omega$, that such a family of pairs exist.</p>\n\n<p>All that we can say is that due to the work of Cohen (and Fraenkel, Mostowski and Specker to some extent) we know that assuming that such a sequence of pairs <em>exists</em> will not add a contradiction to $\\sf ZF$, unless such a contradiction already exists. This is worse, since there are no \"natural failures of choice\" which can prove the existence of such set, other than assuming it exists.</p>\n" }, { "answer_id": 2081835, "score": 4, "text": "<p>You can't <em>construct</em> such things in ZF (because the ZF world you work in could actually be a ZFC world) -- but they may (or may not) <em>exist</em> all the same.</p>\n\n<blockquote>\n <p>I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p>\n</blockquote>\n\n<p>That is not actually a good intuition.</p>\n\n<p>There are (supposing ZF is consistent) models of ZF in which there is a countable family of pairwise disjoint <strong>2-element</strong> sets that has no choice function.</p>\n\n<p>In such a model, if you view <em>any single</em> of the 2-element sets in isolation, you can perfectly well distinguish one of its elements from another. 
What prevents you from constructing a choice function using Replacement is that you need <em>different</em> properties for distinguishing elements of different pairs -- there is no <em>single</em> formula that is true for exactly one element of <em>each</em> of the pairs.</p>\n" }, { "answer_id": 2082239, "score": 3, "text": "<p>Let me add to Henning's excellent answer that it is consistent with $\\sf ZF$ that every family of finite sets admits a choice function, in particular there are no Russell sets (i.e., sets which can be partitioned into pairs, and do not admit a choice function on any infinite family of these pairs).</p>\n\n<p>In fact, this is consistent with even the failure of countable choice. Namely, it might be that we cannot choose from a countable family of non-empty sets; but we are still able to choose from a countable family of <em>pairs</em>.</p>\n\n<p>So we cannot prove outright from $\\sf ZF$ or even $\\sf ZF+\\lnot AC_\\omega$, that such a family of pairs exist.</p>\n\n<p>All that we can say is that due to the work of Cohen (and Fraenkel, Mostowski and Specker to some extent) we know that assuming that such a sequence of pairs <em>exists</em> will not add a contradiction to $\\sf ZF$, unless such a contradiction already exists. This is worse, since there are no \"natural failures of choice\" which can prove the existence of such set, other than assuming it exists.</p>\n" }, { "answer_id": 2081835, "score": 4, "text": "<p>You can't <em>construct</em> such things in ZF (because the ZF world you work in could actually be a ZFC world) -- but they may (or may not) <em>exist</em> all the same.</p>\n\n<blockquote>\n <p>I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p>\n</blockquote>\n\n<p>That is not actually a good intuition.</p>\n\n<p>There are (supposing ZF is consistent) models of ZF in which there is a countable family of pairwise disjoint <strong>2-element</strong> sets that has no choice function.</p>\n\n<p>In such a model, if you view <em>any single</em> of the 2-element sets in isolation, you can perfectly well distinguish one of its elements from another. What prevents you from constructing a choice function using Replacement is that you need <em>different</em> properties for distinguishing elements of different pairs -- there is no <em>single</em> formula that is true for exactly one element of <em>each</em> of the pairs.</p>\n" }, { "answer_id": 2082239, "score": 3, "text": "<p>Let me add to Henning's excellent answer that it is consistent with $\\sf ZF$ that every family of finite sets admits a choice function, in particular there are no Russell sets (i.e., sets which can be partitioned into pairs, and do not admit a choice function on any infinite family of these pairs).</p>\n\n<p>In fact, this is consistent with even the failure of countable choice. 
Namely, it might be that we cannot choose from a countable family of non-empty sets; but we are still able to choose from a countable family of <em>pairs</em>.</p>\n\n<p>So we cannot prove outright from $\\sf ZF$ or even $\\sf ZF+\\lnot AC_\\omega$, that such a family of pairs exist.</p>\n\n<p>All that we can say is that due to the work of Cohen (and Fraenkel, Mostowski and Specker to some extent) we know that assuming that such a sequence of pairs <em>exists</em> will not add a contradiction to $\\sf ZF$, unless such a contradiction already exists. This is worse, since there are no \"natural failures of choice\" which can prove the existence of such set, other than assuming it exists.</p>\n" }, { "answer_id": 2081835, "score": 4, "text": "<p>You can't <em>construct</em> such things in ZF (because the ZF world you work in could actually be a ZFC world) -- but they may (or may not) <em>exist</em> all the same.</p>\n\n<blockquote>\n <p>I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p>\n</blockquote>\n\n<p>That is not actually a good intuition.</p>\n\n<p>There are (supposing ZF is consistent) models of ZF in which there is a countable family of pairwise disjoint <strong>2-element</strong> sets that has no choice function.</p>\n\n<p>In such a model, if you view <em>any single</em> of the 2-element sets in isolation, you can perfectly well distinguish one of its elements from another. What prevents you from constructing a choice function using Replacement is that you need <em>different</em> properties for distinguishing elements of different pairs -- there is no <em>single</em> formula that is true for exactly one element of <em>each</em> of the pairs.</p>\n" }, { "answer_id": 2082239, "score": 3, "text": "<p>Let me add to Henning's excellent answer that it is consistent with $\\sf ZF$ that every family of finite sets admits a choice function, in particular there are no Russell sets (i.e., sets which can be partitioned into pairs, and do not admit a choice function on any infinite family of these pairs).</p>\n\n<p>In fact, this is consistent with even the failure of countable choice. Namely, it might be that we cannot choose from a countable family of non-empty sets; but we are still able to choose from a countable family of <em>pairs</em>.</p>\n\n<p>So we cannot prove outright from $\\sf ZF$ or even $\\sf ZF+\\lnot AC_\\omega$, that such a family of pairs exist.</p>\n\n<p>All that we can say is that due to the work of Cohen (and Fraenkel, Mostowski and Specker to some extent) we know that assuming that such a sequence of pairs <em>exists</em> will not add a contradiction to $\\sf ZF$, unless such a contradiction already exists. This is worse, since there are no \"natural failures of choice\" which can prove the existence of such set, other than assuming it exists.</p>\n" } ]
Question: Can I construct a collection of pairs of socks? Answer 1: <p>You can't <em>construct</em> such things in ZF (because the ZF world you work in could actually be a ZFC world) -- but they may (or may not) <em>exist</em> all the same.</p> <blockquote> <p>I'm not sure how this would be done without the elements of the pairs actually being equal, which would make them no longer pairs. Any difference between the elements of a pair seems like it would be possible grounds for a functional relation that could circumvent the axiom of choice.</p> </blockquote> <p>That is not actually a good intuition.</p> <p>There are (supposing ZF is consistent) models of ZF in which there is a countable family of pairwise disjoint <strong>2-element</strong> sets that has no choice function.</p> <p>In such a model, if you view <em>any single</em> of the 2-element sets in isolation, you can perfectly well distinguish one of its elements from another. What prevents you from constructing a choice function using Replacement is that you need <em>different</em> properties for distinguishing elements of different pairs -- there is no <em>single</em> formula that is true for exactly one element of <em>each</em> of the pairs.</p> Answer 2: <p>Let me add to Henning's excellent answer that it is consistent with $\sf ZF$ that every family of finite sets admits a choice function, in particular there are no Russell sets (i.e., sets which can be partitioned into pairs, and do not admit a choice function on any infinite family of these pairs).</p> <p>In fact, this is consistent with even the failure of countable choice. Namely, it might be that we cannot choose from a countable family of non-empty sets; but we are still able to choose from a countable family of <em>pairs</em>.</p> <p>So we cannot prove outright from $\sf ZF$ or even $\sf ZF+\lnot AC_\omega$ that such a family of pairs exists.</p> <p>All that we can say is that due to the work of Cohen (and Fraenkel, Mostowski and Specker to some extent) we know that assuming that such a sequence of pairs <em>exists</em> will not add a contradiction to $\sf ZF$, unless such a contradiction already exists. This is worse, since there are no \"natural failures of choice\" which can prove the existence of such a set, other than assuming it exists.</p>
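Henning's closing point -- that Replacement produces a choice function exactly when a single uniform formula singles out one element of every pair -- can be made concrete with a toy computation. The sketch below is only a Python analogy, not a statement about ZF: the family `pairs` is hypothetical sample data, and `min` plays the role of the single formula ("the smaller of the two reals") that works for every pair at once; for socks, no such uniform rule is available.

```python
# Toy analogy: one uniform rule ("take the smaller real") picks an
# element from every pair simultaneously, mirroring how Replacement
# yields a choice function when a single formula works for all pairs.
pairs = [{3.5, 1.2}, {-7.0, 4.4}, {0.1, 0.2}]  # hypothetical sample family

choice = {frozenset(p): min(p) for p in pairs}  # the same rule for each pair

for pair, chosen in choice.items():
    print(sorted(pair), "->", chosen)
```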
Real projective space $\mathbb{R}P^1$ is diffeomorphic to $S^1$?
2,081,895
4
https://math.stackexchange.com/questions/2081895/real-projective-space-mathbbrp1-is-diffeomorphic-to-s1
<p>I am reading Lee, Manifolds and Differential Geometry, and I am a bit confused. It is seen that the real projective space $\mathbb{R}P^n$ is homeomorphic to $S^n/ \sim$, where $p \sim q$ iff $p = \pm q$. </p> <p>However, in one of the exercises (1.48) one is asked to prove that $\mathbb{R}P^1$ is diffeomorphic to $S^1$. </p> <p>Is this really true (is there a simple proof of this) and does this imply something for the relation between $S^1$ and $S^1/\sim$? </p> <p>Does this also mean that $\mathbb{R}P^1$ is <em>homeomorphic</em> to $S^1$ (since diffeomorphism is a stronger condition)? </p>
[ { "answer_id": 2081925, "score": 2, "text": "<p>The unique 1-dimensional compact connected manifold is $S^1$. </p>\n\n<p>To see that $RP^1$ is $S^1$, you view $S^1=\\{ e^{it}, t=\\in [0,2\\pi])\\}$.\nThe identification $p\\simeq q$ here can be described like so. Consider the interval $I=\\{e^{it}, t\\in [0,\\pi]\\}$. If $t\\in (0,\\pi)$ there exists a unique $t'\\in (\\pi,2\\pi)$ such that $e^{it}\\simeq e^{it'}$ and $e^{i0}=1\\simeq e^{i\\pi}=-1$. Thus the quotient space is obtained by identifying $0$ and $-1$ in $I$. That is you identify, the endpoints of an interval. This is the circle.</p>\n" }, { "answer_id": 2081938, "score": 2, "text": "<p>$f : RP^1 \\to S^1, z \\mapsto z^2$ is the desired diffeomorphism (hence homeomorphism).</p>\n" }, { "answer_id": 2081925, "score": 2, "text": "<p>The unique 1-dimensional compact connected manifold is $S^1$. </p>\n\n<p>To see that $RP^1$ is $S^1$, you view $S^1=\\{ e^{it}, t=\\in [0,2\\pi])\\}$.\nThe identification $p\\simeq q$ here can be described like so. Consider the interval $I=\\{e^{it}, t\\in [0,\\pi]\\}$. If $t\\in (0,\\pi)$ there exists a unique $t'\\in (\\pi,2\\pi)$ such that $e^{it}\\simeq e^{it'}$ and $e^{i0}=1\\simeq e^{i\\pi}=-1$. Thus the quotient space is obtained by identifying $0$ and $-1$ in $I$. That is you identify, the endpoints of an interval. This is the circle.</p>\n" }, { "answer_id": 2081938, "score": 2, "text": "<p>$f : RP^1 \\to S^1, z \\mapsto z^2$ is the desired diffeomorphism (hence homeomorphism).</p>\n" }, { "answer_id": 2081925, "score": 2, "text": "<p>The unique 1-dimensional compact connected manifold is $S^1$. </p>\n\n<p>To see that $RP^1$ is $S^1$, you view $S^1=\\{ e^{it}, t=\\in [0,2\\pi])\\}$.\nThe identification $p\\simeq q$ here can be described like so. Consider the interval $I=\\{e^{it}, t\\in [0,\\pi]\\}$. If $t\\in (0,\\pi)$ there exists a unique $t'\\in (\\pi,2\\pi)$ such that $e^{it}\\simeq e^{it'}$ and $e^{i0}=1\\simeq e^{i\\pi}=-1$. Thus the quotient space is obtained by identifying $0$ and $-1$ in $I$. That is you identify, the endpoints of an interval. This is the circle.</p>\n" }, { "answer_id": 2081938, "score": 2, "text": "<p>$f : RP^1 \\to S^1, z \\mapsto z^2$ is the desired diffeomorphism (hence homeomorphism).</p>\n" }, { "answer_id": 2081925, "score": 2, "text": "<p>The unique 1-dimensional compact connected manifold is $S^1$. </p>\n\n<p>To see that $RP^1$ is $S^1$, you view $S^1=\\{ e^{it}, t=\\in [0,2\\pi])\\}$.\nThe identification $p\\simeq q$ here can be described like so. Consider the interval $I=\\{e^{it}, t\\in [0,\\pi]\\}$. If $t\\in (0,\\pi)$ there exists a unique $t'\\in (\\pi,2\\pi)$ such that $e^{it}\\simeq e^{it'}$ and $e^{i0}=1\\simeq e^{i\\pi}=-1$. Thus the quotient space is obtained by identifying $0$ and $-1$ in $I$. That is you identify, the endpoints of an interval. This is the circle.</p>\n" }, { "answer_id": 2081938, "score": 2, "text": "<p>$f : RP^1 \\to S^1, z \\mapsto z^2$ is the desired diffeomorphism (hence homeomorphism).</p>\n" } ]
Question: Real projective space $\mathbb{R}P^1$ is diffeomorphic to $S^1$? Answer 1: <p>The unique 1-dimensional compact connected manifold is $S^1$. </p> <p>To see that $RP^1$ is $S^1$, you view $S^1=\{ e^{it}, t\in [0,2\pi)\}$. The identification $p\simeq q$ here can be described like so. Consider the interval $I=\{e^{it}, t\in [0,\pi]\}$. If $t\in (0,\pi)$ there exists a unique $t'\in (\pi,2\pi)$ such that $e^{it}\simeq e^{it'}$, and $e^{i0}=1\simeq e^{i\pi}=-1$. Thus the quotient space is obtained by identifying $1$ and $-1$ in $I$. That is, you identify the endpoints of an interval. This is the circle.</p> Answer 2: <p>$f : RP^1 \to S^1, z \mapsto z^2$ is the desired diffeomorphism (hence homeomorphism).</p>
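As a numerical sanity check of Answer 2: the square map sends $e^{it}$ to $e^{2it}$, so it takes the same value on antipodal points $z$ and $-z$ and hits every point of the circle. A minimal Python sketch (the sample angles are arbitrary):

```python
import cmath

def f(z):
    # The square map z -> z^2, restricted to the unit circle.
    return z * z

# f identifies antipodal points: f(z) == f(-z)
for t in [0.3, 1.1, 2.7]:  # arbitrary sample angles
    z = cmath.exp(1j * t)
    assert abs(f(z) - f(-z)) < 1e-12

# f is onto the circle: every w = e^{is} has the preimage e^{is/2}
for s in [0.5, 2.0, 5.9]:
    w = cmath.exp(1j * s)
    assert abs(f(cmath.exp(1j * s / 2)) - w) < 1e-12

print("z -> z^2 identifies antipodes and maps onto the circle")
```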
Are monotonic and bijective functions the same?
2,082,015
4
https://math.stackexchange.com/questions/2082015/are-monotonic-and-bijective-functions-the-same
<p>The question is simple: consider the family of <strong>monotonic functions</strong> $m(x) : \mathbb{R} \rightarrow \mathbb{R}$, and the family of <strong>bijective functions</strong> $b(x) : \mathbb{R} \rightarrow \mathbb{R}$. Are they actually the same? If not, I would like to see some simple counter-examples.</p>
[ { "answer_id": 2082017, "score": 26, "text": "<p>Short answer:</p>\n\n<p><span class=\"math-container\">$f(x)=1$</span></p>\n\n<p>is monotonic, but clearly not bijective.</p>\n\n<hr>\n\n<p>Longer answer:</p>\n\n<p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p>\n\n<p><span class=\"math-container\">$f(x)=e^x$</span> is monotonic, but not bijective.</p>\n\n<p><span class=\"math-container\">$$f(x)=\\begin{cases}x &amp; x&gt;0\\lor x&lt;-1\\\\ -x-1 &amp; -1\\leq x\\leq 0\\end{cases}$$</span></p>\n\n<p>is bijective, but not monotonic.</p>\n\n<hr>\n\n<p>Even longer answer:</p>\n\n<p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class=\"math-container\">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p>\n\n<blockquote>\n <p>Every continuous bijective function from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$\\mathbb R$</span> is strictly monotonic.</p>\n</blockquote>\n\n<hr>\n\n<p><strong>Edit for the question posed in comments</strong>:</p>\n\n<p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p>\n\n<p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p>\n\n<ol>\n<li>The domain.</li>\n<li>The codomain.</li>\n<li>The \"action\".</li>\n</ol>\n\n<p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p>\n\n<blockquote>\n <p><span class=\"math-container\">$f$</span> is the function <strong>from</strong> <span class=\"math-container\">$A$</span> to <span class=\"math-container\">$B$</span> defined by <span class=\"math-container\">$f(x)=...$</span></p>\n</blockquote>\n\n<p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p>\n\n<p>Example:</p>\n\n<ul>\n<li>The function <span class=\"math-container\">$f:\\mathbb R\\to\\mathbb R$</span> defined by <span class=\"math-container\">$f(x)=e^x$</span> is a function.</li>\n<li>The function <span class=\"math-container\">$g:\\mathbb R\\to(0,\\infty)$</span> defined by <span class=\"math-container\">$g(x)=e^x$</span> is a function.</li>\n</ul>\n\n<p><strong>IMPORTANT</strong>:</p>\n\n<p><span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class=\"math-container\">$f$</span> to <span class=\"math-container\">$(0,\\infty)$</span>, we get <span class=\"math-container\">$g$</span>, but it is not true that <span class=\"math-container\">$f$</span> is the same function as <span class=\"math-container\">$g$</span>.</p>\n\n<p><strong>Why</strong>?</p>\n\n<p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? 
Well, remember:</p>\n\n<blockquote>\n <p>A function <span class=\"math-container\">$f:A\\to B$</span> is surjective if, for every <span class=\"math-container\">$b\\in B$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$f(a)=b$</span>.</p>\n</blockquote>\n\n<p>Now, take any function <span class=\"math-container\">$h:A\\to B$</span>. And define <span class=\"math-container\">$B'=f(A)=\\{f(a)|a\\in A\\}$</span>. Then, this statement is true:</p>\n\n<ul>\n<li>For every <span class=\"math-container\">$b\\in B'$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$h(a)=b$</span>.</li>\n</ul>\n\n<p>So, is <span class=\"math-container\">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class=\"math-container\">$h$</span> to <span class=\"math-container\">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class=\"math-container\">$h$</span> may not be.</p>\n\n<p>Similarly, our function <span class=\"math-container\">$g$</span> mapping from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$(0,\\infty)$</span> is surjective, but the function <span class=\"math-container\">$f$</span> is not.</p>\n" }, { "answer_id": 2082019, "score": 1, "text": "<p>Let $f: \\mathbb{R} \\to \\mathbb{R}$ be the zero function. It is monotonous yet not injective, since if $x_1 \\neq x_2$ does not imply that $f(x_1) \\neq f(x_2)$.</p>\n" }, { "answer_id": 2082029, "score": 1, "text": "<p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\\in Y$. Monotone functions can have \"flat\" spots where they are locally constant.)</p>\n\n<p>It's a simple matter to construct a large number of nonbijective monotone functions.\nIf $f:\\mathbb R\\to\\mathbb R$ is monotone, then $E_f:\\mathbb R\\to\\mathbb R$, where $E_f(x)\\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p>\n" }, { "answer_id": 2082017, "score": 26, "text": "<p>Short answer:</p>\n\n<p><span class=\"math-container\">$f(x)=1$</span></p>\n\n<p>is monotonic, but clearly not bijective.</p>\n\n<hr>\n\n<p>Longer answer:</p>\n\n<p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p>\n\n<p><span class=\"math-container\">$f(x)=e^x$</span> is monotonic, but not bijective.</p>\n\n<p><span class=\"math-container\">$$f(x)=\\begin{cases}x &amp; x&gt;0\\lor x&lt;-1\\\\ -x-1 &amp; -1\\leq x\\leq 0\\end{cases}$$</span></p>\n\n<p>is bijective, but not monotonic.</p>\n\n<hr>\n\n<p>Even longer answer:</p>\n\n<p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class=\"math-container\">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. 
</p>\n\n<blockquote>\n <p>Every continuous bijective function from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$\\mathbb R$</span> is strictly monotonic.</p>\n</blockquote>\n\n<hr>\n\n<p><strong>Edit for the question posed in comments</strong>:</p>\n\n<p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p>\n\n<p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p>\n\n<ol>\n<li>The domain.</li>\n<li>The codomain.</li>\n<li>The \"action\".</li>\n</ol>\n\n<p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p>\n\n<blockquote>\n <p><span class=\"math-container\">$f$</span> is the function <strong>from</strong> <span class=\"math-container\">$A$</span> to <span class=\"math-container\">$B$</span> defined by <span class=\"math-container\">$f(x)=...$</span></p>\n</blockquote>\n\n<p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p>\n\n<p>Example:</p>\n\n<ul>\n<li>The function <span class=\"math-container\">$f:\\mathbb R\\to\\mathbb R$</span> defined by <span class=\"math-container\">$f(x)=e^x$</span> is a function.</li>\n<li>The function <span class=\"math-container\">$g:\\mathbb R\\to(0,\\infty)$</span> defined by <span class=\"math-container\">$g(x)=e^x$</span> is a function.</li>\n</ul>\n\n<p><strong>IMPORTANT</strong>:</p>\n\n<p><span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class=\"math-container\">$f$</span> to <span class=\"math-container\">$(0,\\infty)$</span>, we get <span class=\"math-container\">$g$</span>, but it is not true that <span class=\"math-container\">$f$</span> is the same function as <span class=\"math-container\">$g$</span>.</p>\n\n<p><strong>Why</strong>?</p>\n\n<p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p>\n\n<blockquote>\n <p>A function <span class=\"math-container\">$f:A\\to B$</span> is surjective if, for every <span class=\"math-container\">$b\\in B$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$f(a)=b$</span>.</p>\n</blockquote>\n\n<p>Now, take any function <span class=\"math-container\">$h:A\\to B$</span>. And define <span class=\"math-container\">$B'=f(A)=\\{f(a)|a\\in A\\}$</span>. Then, this statement is true:</p>\n\n<ul>\n<li>For every <span class=\"math-container\">$b\\in B'$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$h(a)=b$</span>.</li>\n</ul>\n\n<p>So, is <span class=\"math-container\">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. 
If we restrict <span class=\"math-container\">$h$</span> to <span class=\"math-container\">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class=\"math-container\">$h$</span> may not be.</p>\n\n<p>Similarly, our function <span class=\"math-container\">$g$</span> mapping from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$(0,\\infty)$</span> is surjective, but the function <span class=\"math-container\">$f$</span> is not.</p>\n" }, { "answer_id": 2082019, "score": 1, "text": "<p>Let $f: \\mathbb{R} \\to \\mathbb{R}$ be the zero function. It is monotonous yet not injective, since if $x_1 \\neq x_2$ does not imply that $f(x_1) \\neq f(x_2)$.</p>\n" }, { "answer_id": 2082029, "score": 1, "text": "<p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\\in Y$. Monotone functions can have \"flat\" spots where they are locally constant.)</p>\n\n<p>It's a simple matter to construct a large number of nonbijective monotone functions.\nIf $f:\\mathbb R\\to\\mathbb R$ is monotone, then $E_f:\\mathbb R\\to\\mathbb R$, where $E_f(x)\\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p>\n" }, { "answer_id": 2082017, "score": 26, "text": "<p>Short answer:</p>\n\n<p><span class=\"math-container\">$f(x)=1$</span></p>\n\n<p>is monotonic, but clearly not bijective.</p>\n\n<hr>\n\n<p>Longer answer:</p>\n\n<p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p>\n\n<p><span class=\"math-container\">$f(x)=e^x$</span> is monotonic, but not bijective.</p>\n\n<p><span class=\"math-container\">$$f(x)=\\begin{cases}x &amp; x&gt;0\\lor x&lt;-1\\\\ -x-1 &amp; -1\\leq x\\leq 0\\end{cases}$$</span></p>\n\n<p>is bijective, but not monotonic.</p>\n\n<hr>\n\n<p>Even longer answer:</p>\n\n<p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class=\"math-container\">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p>\n\n<blockquote>\n <p>Every continuous bijective function from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$\\mathbb R$</span> is strictly monotonic.</p>\n</blockquote>\n\n<hr>\n\n<p><strong>Edit for the question posed in comments</strong>:</p>\n\n<p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. 
The thing is:</p>\n\n<p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p>\n\n<ol>\n<li>The domain.</li>\n<li>The codomain.</li>\n<li>The \"action\".</li>\n</ol>\n\n<p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p>\n\n<blockquote>\n <p><span class=\"math-container\">$f$</span> is the function <strong>from</strong> <span class=\"math-container\">$A$</span> to <span class=\"math-container\">$B$</span> defined by <span class=\"math-container\">$f(x)=...$</span></p>\n</blockquote>\n\n<p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p>\n\n<p>Example:</p>\n\n<ul>\n<li>The function <span class=\"math-container\">$f:\\mathbb R\\to\\mathbb R$</span> defined by <span class=\"math-container\">$f(x)=e^x$</span> is a function.</li>\n<li>The function <span class=\"math-container\">$g:\\mathbb R\\to(0,\\infty)$</span> defined by <span class=\"math-container\">$g(x)=e^x$</span> is a function.</li>\n</ul>\n\n<p><strong>IMPORTANT</strong>:</p>\n\n<p><span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class=\"math-container\">$f$</span> to <span class=\"math-container\">$(0,\\infty)$</span>, we get <span class=\"math-container\">$g$</span>, but it is not true that <span class=\"math-container\">$f$</span> is the same function as <span class=\"math-container\">$g$</span>.</p>\n\n<p><strong>Why</strong>?</p>\n\n<p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p>\n\n<blockquote>\n <p>A function <span class=\"math-container\">$f:A\\to B$</span> is surjective if, for every <span class=\"math-container\">$b\\in B$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$f(a)=b$</span>.</p>\n</blockquote>\n\n<p>Now, take any function <span class=\"math-container\">$h:A\\to B$</span>. And define <span class=\"math-container\">$B'=f(A)=\\{f(a)|a\\in A\\}$</span>. Then, this statement is true:</p>\n\n<ul>\n<li>For every <span class=\"math-container\">$b\\in B'$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$h(a)=b$</span>.</li>\n</ul>\n\n<p>So, is <span class=\"math-container\">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class=\"math-container\">$h$</span> to <span class=\"math-container\">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class=\"math-container\">$h$</span> may not be.</p>\n\n<p>Similarly, our function <span class=\"math-container\">$g$</span> mapping from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$(0,\\infty)$</span> is surjective, but the function <span class=\"math-container\">$f$</span> is not.</p>\n" }, { "answer_id": 2082019, "score": 1, "text": "<p>Let $f: \\mathbb{R} \\to \\mathbb{R}$ be the zero function. 
It is monotonous yet not injective, since if $x_1 \\neq x_2$ does not imply that $f(x_1) \\neq f(x_2)$.</p>\n" }, { "answer_id": 2082029, "score": 1, "text": "<p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\\in Y$. Monotone functions can have \"flat\" spots where they are locally constant.)</p>\n\n<p>It's a simple matter to construct a large number of nonbijective monotone functions.\nIf $f:\\mathbb R\\to\\mathbb R$ is monotone, then $E_f:\\mathbb R\\to\\mathbb R$, where $E_f(x)\\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p>\n" }, { "answer_id": 2082017, "score": 26, "text": "<p>Short answer:</p>\n\n<p><span class=\"math-container\">$f(x)=1$</span></p>\n\n<p>is monotonic, but clearly not bijective.</p>\n\n<hr>\n\n<p>Longer answer:</p>\n\n<p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p>\n\n<p><span class=\"math-container\">$f(x)=e^x$</span> is monotonic, but not bijective.</p>\n\n<p><span class=\"math-container\">$$f(x)=\\begin{cases}x &amp; x&gt;0\\lor x&lt;-1\\\\ -x-1 &amp; -1\\leq x\\leq 0\\end{cases}$$</span></p>\n\n<p>is bijective, but not monotonic.</p>\n\n<hr>\n\n<p>Even longer answer:</p>\n\n<p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class=\"math-container\">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p>\n\n<blockquote>\n <p>Every continuous bijective function from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$\\mathbb R$</span> is strictly monotonic.</p>\n</blockquote>\n\n<hr>\n\n<p><strong>Edit for the question posed in comments</strong>:</p>\n\n<p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p>\n\n<p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p>\n\n<ol>\n<li>The domain.</li>\n<li>The codomain.</li>\n<li>The \"action\".</li>\n</ol>\n\n<p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p>\n\n<blockquote>\n <p><span class=\"math-container\">$f$</span> is the function <strong>from</strong> <span class=\"math-container\">$A$</span> to <span class=\"math-container\">$B$</span> defined by <span class=\"math-container\">$f(x)=...$</span></p>\n</blockquote>\n\n<p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p>\n\n<p>Example:</p>\n\n<ul>\n<li>The function <span class=\"math-container\">$f:\\mathbb R\\to\\mathbb R$</span> defined by <span class=\"math-container\">$f(x)=e^x$</span> is a function.</li>\n<li>The function <span class=\"math-container\">$g:\\mathbb R\\to(0,\\infty)$</span> defined by <span class=\"math-container\">$g(x)=e^x$</span> is a function.</li>\n</ul>\n\n<p><strong>IMPORTANT</strong>:</p>\n\n<p><span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. 
<span class=\"math-container\">$f$</span> and <span class=\"math-container\">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class=\"math-container\">$f$</span> to <span class=\"math-container\">$(0,\\infty)$</span>, we get <span class=\"math-container\">$g$</span>, but it is not true that <span class=\"math-container\">$f$</span> is the same function as <span class=\"math-container\">$g$</span>.</p>\n\n<p><strong>Why</strong>?</p>\n\n<p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p>\n\n<blockquote>\n <p>A function <span class=\"math-container\">$f:A\\to B$</span> is surjective if, for every <span class=\"math-container\">$b\\in B$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$f(a)=b$</span>.</p>\n</blockquote>\n\n<p>Now, take any function <span class=\"math-container\">$h:A\\to B$</span>. And define <span class=\"math-container\">$B'=f(A)=\\{f(a)|a\\in A\\}$</span>. Then, this statement is true:</p>\n\n<ul>\n<li>For every <span class=\"math-container\">$b\\in B'$</span>, there exists some <span class=\"math-container\">$a\\in A$</span> such that <span class=\"math-container\">$h(a)=b$</span>.</li>\n</ul>\n\n<p>So, is <span class=\"math-container\">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class=\"math-container\">$h$</span> to <span class=\"math-container\">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class=\"math-container\">$h$</span> may not be.</p>\n\n<p>Similarly, our function <span class=\"math-container\">$g$</span> mapping from <span class=\"math-container\">$\\mathbb R$</span> to <span class=\"math-container\">$(0,\\infty)$</span> is surjective, but the function <span class=\"math-container\">$f$</span> is not.</p>\n" }, { "answer_id": 2082019, "score": 1, "text": "<p>Let $f: \\mathbb{R} \\to \\mathbb{R}$ be the zero function. It is monotonous yet not injective, since if $x_1 \\neq x_2$ does not imply that $f(x_1) \\neq f(x_2)$.</p>\n" }, { "answer_id": 2082029, "score": 1, "text": "<p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\\in Y$. Monotone functions can have \"flat\" spots where they are locally constant.)</p>\n\n<p>It's a simple matter to construct a large number of nonbijective monotone functions.\nIf $f:\\mathbb R\\to\\mathbb R$ is monotone, then $E_f:\\mathbb R\\to\\mathbb R$, where $E_f(x)\\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p>\n" } ]
Question: Are monotonic and bijective functions the same? Answer 1: <p>Short answer:</p> <p><span class="math-container">$f(x)=1$</span></p> <p>is monotonic, but clearly not bijective.</p> <hr> <p>Longer answer:</p> <p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p> <p><span class="math-container">$f(x)=e^x$</span> is monotonic, but not bijective.</p> <p><span class="math-container">$$f(x)=\begin{cases}x &amp; x&gt;0\lor x&lt;-1\\ -x-1 &amp; -1\leq x\leq 0\end{cases}$$</span></p> <p>is bijective, but not monotonic.</p> <hr> <p>Even longer answer:</p> <p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class="math-container">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p> <blockquote> <p>Every continuous bijective function from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> is strictly monotonic.</p> </blockquote> <hr> <p><strong>Edit for the question posed in comments</strong>:</p> <p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p> <p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p> <ol> <li>The domain.</li> <li>The codomain.</li> <li>The "action".</li> </ol> <p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p> <blockquote> <p><span class="math-container">$f$</span> is the function <strong>from</strong> <span class="math-container">$A$</span> to <span class="math-container">$B$</span> defined by <span class="math-container">$f(x)=...$</span></p> </blockquote> <p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p> <p>Example:</p> <ul> <li>The function <span class="math-container">$f:\mathbb R\to\mathbb R$</span> defined by <span class="math-container">$f(x)=e^x$</span> is a function.</li> <li>The function <span class="math-container">$g:\mathbb R\to(0,\infty)$</span> defined by <span class="math-container">$g(x)=e^x$</span> is a function.</li> </ul> <p><strong>IMPORTANT</strong>:</p> <p><span class="math-container">$f$</span> and <span class="math-container">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class="math-container">$f$</span> and <span class="math-container">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class="math-container">$f$</span> to <span class="math-container">$(0,\infty)$</span>, we get <span class="math-container">$g$</span>, but it is not true that <span class="math-container">$f$</span> is the same function as <span class="math-container">$g$</span>.</p> <p><strong>Why</strong>?</p> <p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? 
Well, remember:</p> <blockquote> <p>A function <span class="math-container">$f:A\to B$</span> is surjective if, for every <span class="math-container">$b\in B$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$f(a)=b$</span>.</p> </blockquote> <p>Now, take any function <span class="math-container">$h:A\to B$</span>. And define <span class="math-container">$B'=f(A)=\{f(a)|a\in A\}$</span>. Then, this statement is true:</p> <ul> <li>For every <span class="math-container">$b\in B'$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$h(a)=b$</span>.</li> </ul> <p>So, is <span class="math-container">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class="math-container">$h$</span> to <span class="math-container">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class="math-container">$h$</span> may not be.</p> <p>Similarly, our function <span class="math-container">$g$</span> mapping from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$(0,\infty)$</span> is surjective, but the function <span class="math-container">$f$</span> is not.</p> Answer 2: <p>Let $f: \mathbb{R} \to \mathbb{R}$ be the zero function. It is monotonous yet not injective, since if $x_1 \neq x_2$ does not imply that $f(x_1) \neq f(x_2)$.</p> Answer 3: <p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\in Y$. Monotone functions can have "flat" spots where they are locally constant.)</p> <p>It's a simple matter to construct a large number of nonbijective monotone functions. If $f:\mathbb R\to\mathbb R$ is monotone, then $E_f:\mathbb R\to\mathbb R$, where $E_f(x)\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p> Answer 4: <p>Short answer:</p> <p><span class="math-container">$f(x)=1$</span></p> <p>is monotonic, but clearly not bijective.</p> <hr> <p>Longer answer:</p> <p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p> <p><span class="math-container">$f(x)=e^x$</span> is monotonic, but not bijective.</p> <p><span class="math-container">$$f(x)=\begin{cases}x &amp; x&gt;0\lor x&lt;-1\\ -x-1 &amp; -1\leq x\leq 0\end{cases}$$</span></p> <p>is bijective, but not monotonic.</p> <hr> <p>Even longer answer:</p> <p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class="math-container">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p> <blockquote> <p>Every continuous bijective function from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> is strictly monotonic.</p> </blockquote> <hr> <p><strong>Edit for the question posed in comments</strong>:</p> <p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. 
The thing is:</p> <p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p> <ol> <li>The domain.</li> <li>The codomain.</li> <li>The "action".</li> </ol> <p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p> <blockquote> <p><span class="math-container">$f$</span> is the function <strong>from</strong> <span class="math-container">$A$</span> to <span class="math-container">$B$</span> defined by <span class="math-container">$f(x)=...$</span></p> </blockquote> <p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p> <p>Example:</p> <ul> <li>The function <span class="math-container">$f:\mathbb R\to\mathbb R$</span> defined by <span class="math-container">$f(x)=e^x$</span> is a function.</li> <li>The function <span class="math-container">$g:\mathbb R\to(0,\infty)$</span> defined by <span class="math-container">$g(x)=e^x$</span> is a function.</li> </ul> <p><strong>IMPORTANT</strong>:</p> <p><span class="math-container">$f$</span> and <span class="math-container">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class="math-container">$f$</span> and <span class="math-container">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class="math-container">$f$</span> to <span class="math-container">$(0,\infty)$</span>, we get <span class="math-container">$g$</span>, but it is not true that <span class="math-container">$f$</span> is the same function as <span class="math-container">$g$</span>.</p> <p><strong>Why</strong>?</p> <p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p> <blockquote> <p>A function <span class="math-container">$f:A\to B$</span> is surjective if, for every <span class="math-container">$b\in B$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$f(a)=b$</span>.</p> </blockquote> <p>Now, take any function <span class="math-container">$h:A\to B$</span>. And define <span class="math-container">$B'=f(A)=\{f(a)|a\in A\}$</span>. Then, this statement is true:</p> <ul> <li>For every <span class="math-container">$b\in B'$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$h(a)=b$</span>.</li> </ul> <p>So, is <span class="math-container">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class="math-container">$h$</span> to <span class="math-container">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class="math-container">$h$</span> may not be.</p> <p>Similarly, our function <span class="math-container">$g$</span> mapping from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$(0,\infty)$</span> is surjective, but the function <span class="math-container">$f$</span> is not.</p> Answer 5: <p>Let $f: \mathbb{R} \to \mathbb{R}$ be the zero function. 
It is monotonous yet not injective, since if $x_1 \neq x_2$ does not imply that $f(x_1) \neq f(x_2)$.</p> Answer 6: <p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\in Y$. Monotone functions can have "flat" spots where they are locally constant.)</p> <p>It's a simple matter to construct a large number of nonbijective monotone functions. If $f:\mathbb R\to\mathbb R$ is monotone, then $E_f:\mathbb R\to\mathbb R$, where $E_f(x)\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p> Answer 7: <p>Short answer:</p> <p><span class="math-container">$f(x)=1$</span></p> <p>is monotonic, but clearly not bijective.</p> <hr> <p>Longer answer:</p> <p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p> <p><span class="math-container">$f(x)=e^x$</span> is monotonic, but not bijective.</p> <p><span class="math-container">$$f(x)=\begin{cases}x &amp; x&gt;0\lor x&lt;-1\\ -x-1 &amp; -1\leq x\leq 0\end{cases}$$</span></p> <p>is bijective, but not monotonic.</p> <hr> <p>Even longer answer:</p> <p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class="math-container">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. </p> <blockquote> <p>Every continuous bijective function from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> is strictly monotonic.</p> </blockquote> <hr> <p><strong>Edit for the question posed in comments</strong>:</p> <p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p> <p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p> <ol> <li>The domain.</li> <li>The codomain.</li> <li>The "action".</li> </ol> <p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p> <blockquote> <p><span class="math-container">$f$</span> is the function <strong>from</strong> <span class="math-container">$A$</span> to <span class="math-container">$B$</span> defined by <span class="math-container">$f(x)=...$</span></p> </blockquote> <p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p> <p>Example:</p> <ul> <li>The function <span class="math-container">$f:\mathbb R\to\mathbb R$</span> defined by <span class="math-container">$f(x)=e^x$</span> is a function.</li> <li>The function <span class="math-container">$g:\mathbb R\to(0,\infty)$</span> defined by <span class="math-container">$g(x)=e^x$</span> is a function.</li> </ul> <p><strong>IMPORTANT</strong>:</p> <p><span class="math-container">$f$</span> and <span class="math-container">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class="math-container">$f$</span> and <span class="math-container">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. 
It is true that if we <em>restrict</em> the codomain of <span class="math-container">$f$</span> to <span class="math-container">$(0,\infty)$</span>, we get <span class="math-container">$g$</span>, but it is not true that <span class="math-container">$f$</span> is the same function as <span class="math-container">$g$</span>.</p> <p><strong>Why</strong>?</p> <p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p> <blockquote> <p>A function <span class="math-container">$f:A\to B$</span> is surjective if, for every <span class="math-container">$b\in B$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$f(a)=b$</span>.</p> </blockquote> <p>Now, take any function <span class="math-container">$h:A\to B$</span>. And define <span class="math-container">$B'=f(A)=\{f(a)|a\in A\}$</span>. Then, this statement is true:</p> <ul> <li>For every <span class="math-container">$b\in B'$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$h(a)=b$</span>.</li> </ul> <p>So, is <span class="math-container">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. If we restrict <span class="math-container">$h$</span> to <span class="math-container">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class="math-container">$h$</span> may not be.</p> <p>Similarly, our function <span class="math-container">$g$</span> mapping from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$(0,\infty)$</span> is surjective, but the function <span class="math-container">$f$</span> is not.</p> Answer 8: <p>Let $f: \mathbb{R} \to \mathbb{R}$ be the zero function. It is monotonous yet not injective, since if $x_1 \neq x_2$ does not imply that $f(x_1) \neq f(x_2)$.</p> Answer 9: <p>I guess you really mean strictly monotone functions, since constant functions are monotone and obviously not bijective. But even so, it is not enough. (Recall: a function $f:X\to Y$ between topological spaces is said to be monotone if $f^{-1}(y)$ is connnected for each $y\in Y$. Monotone functions can have "flat" spots where they are locally constant.)</p> <p>It's a simple matter to construct a large number of nonbijective monotone functions. If $f:\mathbb R\to\mathbb R$ is monotone, then $E_f:\mathbb R\to\mathbb R$, where $E_f(x)\equiv e^{f(x)}$, is a monotone function which is not surjective (so not bijective).</p> Answer 10: <p>Short answer:</p> <p><span class="math-container">$f(x)=1$</span></p> <p>is monotonic, but clearly not bijective.</p> <hr> <p>Longer answer:</p> <p>You are probably asking about <em>strictly</em> monotonic functions (that way you can get injectivity), but the answer is still <em>no</em>.</p> <p><span class="math-container">$f(x)=e^x$</span> is monotonic, but not bijective.</p> <p><span class="math-container">$$f(x)=\begin{cases}x &amp; x&gt;0\lor x&lt;-1\\ -x-1 &amp; -1\leq x\leq 0\end{cases}$$</span></p> <p>is bijective, but not monotonic.</p> <hr> <p>Even longer answer:</p> <p>You might mean <em>strictly monotonic continuous</em> functions, in which case the answer is still no (<span class="math-container">$f(x)=e^x$</span> is strictly monotonic and continuous, but not bijective), however, it is true that the other type of counterexample cannot be found, i.e. 
</p> <blockquote> <p>Every continuous bijective function from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> is strictly monotonic.</p> </blockquote> <hr> <p><strong>Edit for the question posed in comments</strong>:</p> <p>You are making a mistake a lot of math students make, and it's usually the fault of the teachers not emphasizing it enough. The thing is:</p> <p><strong><em>A FUNCTION IS DEFINED BY THREE THINGS</em></strong>:</p> <ol> <li>The domain.</li> <li>The codomain.</li> <li>The "action".</li> </ol> <p>So, if I want to <strong>truly</strong> mathematically correctly define some function, I can say:</p> <blockquote> <p><span class="math-container">$f$</span> is the function <strong>from</strong> <span class="math-container">$A$</span> to <span class="math-container">$B$</span> defined by <span class="math-container">$f(x)=...$</span></p> </blockquote> <p>Note, it is important to note both <strong>from where</strong> the function is mapping, <strong>to where</strong> it is mapping, and <strong>how</strong> it is mapping.</p> <p>Example:</p> <ul> <li>The function <span class="math-container">$f:\mathbb R\to\mathbb R$</span> defined by <span class="math-container">$f(x)=e^x$</span> is a function.</li> <li>The function <span class="math-container">$g:\mathbb R\to(0,\infty)$</span> defined by <span class="math-container">$g(x)=e^x$</span> is a function.</li> </ul> <p><strong>IMPORTANT</strong>:</p> <p><span class="math-container">$f$</span> and <span class="math-container">$g$</span> <strong>are not the same function</strong>. I cannot stress this enough. <span class="math-container">$f$</span> and <span class="math-container">$g$</span> map all numbers to the exact same number, but because their codomains are different, they are, by definition, different functions. It is true that if we <em>restrict</em> the codomain of <span class="math-container">$f$</span> to <span class="math-container">$(0,\infty)$</span>, we get <span class="math-container">$g$</span>, but it is not true that <span class="math-container">$f$</span> is the same function as <span class="math-container">$g$</span>.</p> <p><strong>Why</strong>?</p> <p>You may think this is unnecessary, but it is very necessary if you want any meaningful definition of the word <strong>surjective</strong>. Why? Well, remember:</p> <blockquote> <p>A function <span class="math-container">$f:A\to B$</span> is surjective if, for every <span class="math-container">$b\in B$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$f(a)=b$</span>.</p> </blockquote> <p>Now, take any function <span class="math-container">$h:A\to B$</span>. And define <span class="math-container">$B'=f(A)=\{f(a)|a\in A\}$</span>. Then, this statement is true:</p> <ul> <li>For every <span class="math-container">$b\in B'$</span>, there exists some <span class="math-container">$a\in A$</span> such that <span class="math-container">$h(a)=b$</span>.</li> </ul> <p>So, is <span class="math-container">$h$</span> all of a sudden surjective? Just because we restricted its codomain? <strong>NO</strong>. 
If we restrict <span class="math-container">$h$</span> to <span class="math-container">$B'$</span>, we get a different function, and the <em>restricted</em> function is surjective, but <span class="math-container">$h$</span> may not be.</p> <p>Similarly, our function <span class="math-container">$g$</span> mapping from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$(0,\infty)$</span> is surjective, but the function <span class="math-container">$f$</span> is not.</p>
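A quick numerical sanity check of the two counterexamples above (a minimal Python sketch; the sampling grid and the helper name `piecewise` are my own choices, not part of the answers): `exp` is strictly monotone yet never reaches any $b \le 0$, so it is not surjective onto $\mathbb R$, while the piecewise example is bijective but not monotonic.

```python
import math

# Strictly monotone but not surjective onto R: exp(x) never takes a value <= 0.
xs = [x / 10 for x in range(-50, 51)]
ys = [math.exp(x) for x in xs]
assert all(a < b for a, b in zip(ys, ys[1:]))  # strictly increasing on the sample
assert all(y > 0 for y in ys)                  # misses every b <= 0

# Bijective but not monotonic: the piecewise example from the answer above.
def piecewise(x):
    return -x - 1 if -1 <= x <= 0 else x

assert piecewise(-1) == 0 and piecewise(0) == -1  # decreasing on [-1, 0] ...
assert piecewise(1) < piecewise(2)                # ... increasing elsewhere
```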
Prove that : $\sum_{n=1}^{\infty}\left({1\over n}-{2\zeta(2n)\over 2n+1}\right)=\ln{\pi}-1$
2,082,045
4
https://math.stackexchange.com/questions/2082045/prove-that-sum-n-1-infty-left1-over-n-2-zeta2n-over-2n1-right
<p>Prove that </p> <p>$$\sum_{n=1}^{\infty}\left({1\over n}-{2\zeta(2n)\over 2n+1}\right)=\ln{\pi}-1\tag0$$</p> <p>My try:</p> <p>$$\sum_{n=1}^{\infty}\left({1\over n}-{1\over n+1}\right)=1\tag1$$</p> <p>From <a href="http://mathworld.wolfram.com/RiemannZetaFunction.html" rel="nofollow noreferrer">Wolfram MathWorld, eq. (126)</a>:</p> <p>$$\sum_{n=1}^{\infty}{\zeta(2n)\over n(2n+1)}=\ln{2\pi}-1\tag2$$</p> <p>Rewrite:</p> <p>$$\sum_{n=1}^{\infty}\left({\zeta(2n)\over 2n}-{\zeta(2n)\over 2n+1}\right)={1\over 2}\left(\ln{2\pi}-1\right)\tag3$$</p> <p>$2\times(3)-(0)$:</p> <p>$$\sum_{n=1}^{\infty}\left({\zeta(2n)-1\over n}\right)=\ln{2}\tag4$$</p> <p>$(4)$ is a well-known, proven identity; see <a href="https://en.m.wikipedia.org/wiki/Riemann_zeta_function" rel="nofollow noreferrer">Wikipedia (infinite series)</a>.</p> <p>If I got to $(4)$, and it is known that $(4)$ is correct, does this imply that my original identity $(0)$ must be correct?</p>
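As a numerical cross-check of identity $(4)$ before relying on it (a minimal Python sketch; swapping the order of summation in $\sum_n(\zeta(2n)-1)/n$ gives $-\sum_{k\ge2}\ln(1-1/k^2)$, a telescoping product, and the cutoff `K` is an arbitrary choice of mine):

```python
import math

# sum_{n>=1} (zeta(2n)-1)/n = -sum_{k>=2} log(1 - 1/k^2), which telescopes to log 2
K = 10**6
s = -sum(math.log1p(-1.0 / (k * k)) for k in range(2, K + 1))
assert abs(s - math.log(2)) < 1e-5  # the tail of the k-sum is O(1/K)
print(s)  # ~ 0.693147...
```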
[ { "answer_id": 2083161, "score": 5, "text": "<p>Note that $$\\sum_{n\\geq1}\\left(\\frac{1}{n}-\\frac{2\\zeta\\left(2n\\right)}{2n+1}\\right)=\\sum_{n\\geq1}\\left(\\frac{2n+1-2n\\zeta\\left(2n\\right)}{n\\left(2n+1\\right)}\\right)\n $$ $$=2\\sum_{n\\geq1}\\left(\\frac{1-\\zeta\\left(2n\\right)}{2n+1}\\right)+\\sum_{n\\geq1}\\left(\\frac{1}{n\\left(2n+1\\right)}\\right)=S_{1}+S_{2},\n $$ say. So let us analyze $S_{1}\n $. From the generating function $$\\sum_{n\\geq1}\\zeta\\left(2n\\right)x^{2n}=\\frac{1-\\pi x\\cot\\left(\\pi x\\right)}{2},\\,\\left|x\\right|&lt;1\n $$ we get $$\\sum_{n\\geq1}\\frac{\\zeta\\left(2n\\right)x^{2n+1}}{2n+1}=\\frac{1}{2}\\left(x-\\int_{0}^{x}\\pi y\\cot\\left(\\pi y\\right)dy\\right)\\tag{1}\n $$ and the integral in the RHS of $(1)$ is, integrating by parts, $$\\int_{0}^{x}\\pi y\\cot\\left(\\pi y\\right)dy=x\\log\\left(\\sin\\left(\\pi x\\right)\\right)-\\frac{1}{\\pi}\\int_{0}^{\\pi x}\\log\\left(\\sin\\left(u\\right)\\right)du\n $$ $$=x\\log\\left(\\sin\\left(\\pi x\\right)\\right)+\\frac{1}{2\\pi}\\textrm{Cl}_{2}\\left(2\\pi x\\right)+x\\log\\left(2\\right)\n $$ where $\\textrm{Cl}_{2}\\left(x\\right)\n $ is the <a href=\"https://en.wikipedia.org/wiki/Clausen_function\" rel=\"noreferrer\">Clausen function of order $2$</a>. Now recalling Taylor series of $\\textrm{arctanh}(x)\n $ $$\\sum_{n\\geq1}\\frac{x^{2n+1}}{2n+1}=\\textrm{arctanh}(x)-x\n $$ we get $$\\sum_{n\\geq1}\\frac{\\left(1-\\zeta\\left(2n\\right)\\right)x^{2n+1}}{2n+1}=\\textrm{arctanh}(x)-x-\\frac{1}{2}\\left(x-x\\log\\left(\\sin\\left(\\pi x\\right)\\right)-\\frac{1}{2\\pi}\\textrm{Cl}_{2}\\left(2\\pi x\\right)-x\\log\\left(2\\right)\\right)=f(x)$$ and recalling that $$\\textrm{Cl}_{2}\\left(m\\pi\\right)=0,\\, m\\in\\mathbb{Z}\n $$ we get $$\\sum_{n\\geq1}\\frac{\\left(1-\\zeta\\left(2n\\right)\\right)}{2n+1}=\\lim_{x\\rightarrow1} f(x)=-\\frac{3}{2}+\\frac{\\log\\left(4\\pi\\right)}{2}.\n $$ Now we consider $S_{2}\n $. From the Taylor series of $\\log\\left(1-x\\right)\n $ we observe that $$\\sum_{n\\geq1}\\frac{x^{2n+1}}{n\\left(2n+1\\right)}=-\\int_{0}^{x}\\log\\left(1-y^{2}\\right)dy=-\\int_{0}^{x}\\log\\left(1-y\\right)dy-\\int_{0}^{x}\\log\\left(1+y\\right)dy\n $$ $$=-\\left(1-x\\right)\\log\\left(1-x\\right)-\\left(x+1\\right)\\log\\left(x+1\\right)+2x\n $$ hence $$\\sum_{n\\geq1}\\frac{1}{n\\left(2n+1\\right)}=\\lim_{x\\rightarrow1}\\left(-\\left(1-x\\right)\\log\\left(1-x\\right)-\\left(x+1\\right)\\log\\left(x+1\\right)+2x\\right)\n $$ $$=2-\\log\\left(4\\right)\n $$ so finally $$\\sum_{n\\geq1}\\left(\\frac{1}{n}-\\frac{2\\zeta\\left(2n\\right)}{2n+1}\\right)=\\color{red}{\\log\\left(\\pi\\right)-1}$$ as wanted. 
</p>\n" }, { "answer_id": 2083393, "score": 1, "text": "<p>Using the generalisation $Q_m(x)$ of the Gamma function in </p>\n\n<p><a href=\"https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf\" rel=\"nofollow noreferrer\">https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf</a> </p>\n\n<p>with $\\enspace\\displaystyle \\ln Q_m(x)=\\frac{(-x)^{m+1}}{m+1}\\gamma +\\sum\\limits_{n=2}^\\infty\\frac{(-x)^{m+n}}{m+n}\\zeta(n)\\enspace$ on page $13$, $(4.2)$, $Q_m:=Q_m(1),$</p>\n\n<p>and $\\enspace\\displaystyle Q_m^*(x):=(1+x)Q_m(x)\\enspace$ on page $24$, $(4.8)$ and $(4.9)$, </p>\n\n<p>and with $\\enspace\\displaystyle Q_1^*(1)=2Q_1(1)=2\\frac{\\sqrt{2\\pi}}{e}\\enspace$ and $\\enspace\\displaystyle Q_1^*(-1)=\\frac{1}{eQ_1(1)}=\\frac{1}{\\sqrt{2\\pi}}\\enspace$ we get</p>\n\n<p>$\\enspace\\displaystyle\\sum\\limits_{n=1}^\\infty(\\frac{1}{n}-\\frac{2\\zeta(2n)}{2n+1})=-2\\sum\\limits_{n=1}^\\infty(\\frac{\\zeta(2n)}{2n+1}-\\frac{1}{2n+1})+2\\sum\\limits_{n=1}^\\infty(\\frac{1}{2n}-\\frac{1}{2n+1})=$ </p>\n\n<p>$\\enspace\\displaystyle =-2+\\ln Q_1^*(1)-\\ln Q_1^*(-1)+2(1-\\ln 2)=\\ln\\pi -1$</p>\n" }, { "answer_id": 4688854, "score": 0, "text": "<p>We have to prove the relation</p>\n<p><span class=\"math-container\">$$s:=\\sum _{n=1}^{\\infty } \\left(\\frac{1}{n}-\\frac{2 \\zeta (2 n)}{2 n+1}\\right)=\\log(\\pi)-1\\tag{1}$$</span></p>\n<p>Instead of the sum <span class=\"math-container\">$s$</span> we consider the generating sum (<span class=\"math-container\">$0&lt;z&lt;1$</span>)</p>\n<p><span class=\"math-container\">$$g(z)=\\sum _{n=1}^{\\infty } z^{2n}\\left(\\frac{1}{n}-\\frac{2 \\zeta (2 n)}{2 n+1}\\right)=g_1(z)+g_2(z)\\tag{2}$$</span></p>\n<p>so that we have <span class=\"math-container\">$s=\\underset{z\\to 1_{-}}{\\text{lim}}(g(z))$</span>.</p>\n<p>Now\n<span class=\"math-container\">$$g_1(z) = -\\log(1-z^2)\\tag{3}$$</span></p>\n<p>and</p>\n<p><span class=\"math-container\">$$\\begin{align}\ng_2(z) &amp; := -2 \\sum _{n=1}^{\\infty } \\frac{ \\zeta (2 n)z^{2 n}}{2 n+1}\\\\\n&amp; = -2 \\sum _{n=1}^{\\infty } \\zeta (2 n) \\frac{1}{z}\\int_0^z t^{2 n} \\, dt \\\\\n&amp; =\n -\\frac{2}{z} \\int_0^z \\left(\\sum _{n=1}^{\\infty } \\zeta (2 n) t^{2 n}\\right) \\, dt\\\\\n&amp; = -\\frac{2}{z}\\sum _{k=1}^{\\infty } \\int_0^z \\left(\\sum _{n=1}^{\\infty } \\frac{t^{2 n}}{k^{2 n}}\\right) \\, dt \\\\\n&amp; = -\\frac{2}{z} \\sum _{k=1}^{\\infty } \\int_0^z \\left(-\\frac{t^2}{t^2-k^2} \\right)\\, dt\\\\\n&amp; = \\frac{2}{z}\\sum _{k=1}^{\\infty } \\left(z-\\frac{1}{2} k \\log \\left(\\frac{k+z}{k-z}\\right)\\right)\n\\end{align}\n\\tag{4}$$</span></p>\n<p>In order to do the limit <span class=\"math-container\">$z\\to 1$</span> we consider <span class=\"math-container\">$g_1$</span> and the first term of the sum of <span class=\"math-container\">$g_2$</span> together plus the remaining sum:</p>\n<p><span class=\"math-container\">$$s=\\underset{z\\to 1}{\\text{lim}}(g(z))= \\underset{z\\to 1}{\\text{lim}}(-\\log(1-z^2) + \\left(z-\\frac{1}{2} \\log \\left(\\frac{1+z}{1-z}\\right)\\right))+ s_3\\\\= 2 (1-\\log (2))+s3\\tag{5}$$</span></p>\n<p>here <span class=\"math-container\">$s3=\\underset{m\\to \\infty}{\\text{lim}}s_3(m)$</span> where</p>\n<p><span class=\"math-container\">$$s_3(m) = 2 \\sum _{k=2}^{m } \\left(\\frac{1}{2} k \\log \\left(\\frac{k+1}{k-1}\\right)-1\\right)\\tag{6}$$</span></p>\n<p>This tough looking sum can luckily be done exactly by elementary operations, i.e. 
shifting summation indices and summation limits, with the result</p>\n<p><span class=\"math-container\">$$s_3(m) =2 \\log (m!)+2 (m-1)-m \\log \\left(\\frac{1}{m}+1\\right)-2 m \\log (m)-\\log (m)+\\log (2)\\tag{7}$$</span></p>\n<p>For <span class=\"math-container\">$m\\to \\infty$</span> we have two non-trivial expansions</p>\n<p><span class=\"math-container\">$$\\log \\left(\\frac{1}{m}+1\\right) \\simeq \\frac{1}{m}-\\frac{1}{2 m^2}\\tag{8} $$</span></p>\n<p>and the Stirling expansion</p>\n<p><span class=\"math-container\">$$m!\\simeq \\left(\\frac{\\sqrt{\\frac{\\pi }{2}}}{6 \\sqrt{m}}+\\sqrt{2 \\color{red}\\pi } \\sqrt{m}\\right) e^{m (\\log (m)-1)}\\tag{9}$$</span></p>\n<p>which is responsible for bringing in the constant <span class=\"math-container\">$\\pi$</span>.</p>\n<p>Adding the asymptotic terms and going to the limit <span class=\"math-container\">$m=\\infty$</span> we obtain</p>\n<p><span class=\"math-container\">$$s_3 = -3+\\log (2)+\\log (2 \\pi )\\tag{10}$$</span></p>\n<p>so that finally</p>\n<p><span class=\"math-container\">$$s=2 (1-\\log (2))-3+\\log (2)+\\log (2 \\pi )=\\log(\\pi)-1\\tag{11}$$</span></p>\n<p>which completes the derivation.</p>\n" } ]
Question: Prove that : $\sum_{n=1}^{\infty}\left({1\over n}-{2\zeta(2n)\over 2n+1}\right)=\ln{\pi}-1$ Answer 1: <p>Note that $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\sum_{n\geq1}\left(\frac{2n+1-2n\zeta\left(2n\right)}{n\left(2n+1\right)}\right) $$ $$=2\sum_{n\geq1}\left(\frac{1-\zeta\left(2n\right)}{2n+1}\right)+\sum_{n\geq1}\left(\frac{1}{n\left(2n+1\right)}\right)=S_{1}+S_{2}, $$ say. So let us analyze $S_{1} $. From the generating function $$\sum_{n\geq1}\zeta\left(2n\right)x^{2n}=\frac{1-\pi x\cot\left(\pi x\right)}{2},\,\left|x\right|&lt;1 $$ we get $$\sum_{n\geq1}\frac{\zeta\left(2n\right)x^{2n+1}}{2n+1}=\frac{1}{2}\left(x-\int_{0}^{x}\pi y\cot\left(\pi y\right)dy\right)\tag{1} $$ and the integral in the RHS of $(1)$ is, integrating by parts, $$\int_{0}^{x}\pi y\cot\left(\pi y\right)dy=x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{\pi}\int_{0}^{\pi x}\log\left(\sin\left(u\right)\right)du $$ $$=x\log\left(\sin\left(\pi x\right)\right)+\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)+x\log\left(2\right) $$ where $\textrm{Cl}_{2}\left(x\right) $ is the <a href="https://en.wikipedia.org/wiki/Clausen_function" rel="noreferrer">Clausen function of order $2$</a>. Now recalling Taylor series of $\textrm{arctanh}(x) $ $$\sum_{n\geq1}\frac{x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x-\frac{1}{2}\left(x-x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)-x\log\left(2\right)\right)=f(x)$$ and recalling that $$\textrm{Cl}_{2}\left(m\pi\right)=0,\, m\in\mathbb{Z} $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)}{2n+1}=\lim_{x\rightarrow1} f(x)=-\frac{3}{2}+\frac{\log\left(4\pi\right)}{2}. $$ Now we consider $S_{2} $. From the Taylor series of $\log\left(1-x\right) $ we observe that $$\sum_{n\geq1}\frac{x^{2n+1}}{n\left(2n+1\right)}=-\int_{0}^{x}\log\left(1-y^{2}\right)dy=-\int_{0}^{x}\log\left(1-y\right)dy-\int_{0}^{x}\log\left(1+y\right)dy $$ $$=\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x $$ hence $$\sum_{n\geq1}\frac{1}{n\left(2n+1\right)}=\lim_{x\rightarrow1}\left(\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x\right) $$ $$=2-\log\left(4\right) $$ so finally $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\color{red}{\log\left(\pi\right)-1}$$ as wanted. 
</p> Answer 2: <p>Using the generalisation $Q_m(x)$ of the Gamma function in </p> <p><a href="https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf" rel="nofollow noreferrer">https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf</a> </p> <p>with $\enspace\displaystyle \ln Q_m(x)=\frac{(-x)^{m+1}}{m+1}\gamma +\sum\limits_{n=2}^\infty\frac{(-x)^{m+n}}{m+n}\zeta(n)\enspace$ on page $13$, $(4.2)$, $Q_m:=Q_m(1),$</p> <p>and $\enspace\displaystyle Q_m^*(x):=(1+x)Q_m(x)\enspace$ on page $24$, $(4.8)$ and $(4.9)$, </p> <p>and with $\enspace\displaystyle Q_1^*(1)=2Q_1(1)=2\frac{\sqrt{2\pi}}{e}\enspace$ and $\enspace\displaystyle Q_1^*(-1)=\frac{1}{eQ_1(1)}=\frac{1}{\sqrt{2\pi}}\enspace$ we get</p> <p>$\enspace\displaystyle\sum\limits_{n=1}^\infty(\frac{1}{n}-\frac{2\zeta(2n)}{2n+1})=-2\sum\limits_{n=1}^\infty(\frac{\zeta(2n)}{2n+1}-\frac{1}{2n+1})+2\sum\limits_{n=1}^\infty(\frac{1}{2n}-\frac{1}{2n+1})=$ </p> <p>$\enspace\displaystyle =-2+\ln Q_1^*(1)-\ln Q_1^*(-1)+2(1-\ln 2)=\ln\pi -1$</p> Answer 3: <p>We have to prove the relation</p> <p><span class="math-container">$$s:=\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=\log(\pi)-1\tag{1}$$</span></p> <p>Instead of the sum <span class="math-container">$s$</span> we consider the generating sum (<span class="math-container">$0&lt;z&lt;1$</span>)</p> <p><span class="math-container">$$g(z)=\sum _{n=1}^{\infty } z^{2n}\left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=g_1(z)+g_2(z)\tag{2}$$</span></p> <p>so that we have <span class="math-container">$s=\underset{z\to 1_{-}}{\text{lim}}(g(z))$</span>.</p> <p>Now <span class="math-container">$$g_1(z) = -\log(1-z^2)\tag{3}$$</span></p> <p>and</p> <p><span class="math-container">$$\begin{align} g_2(z) &amp; := -2 \sum _{n=1}^{\infty } \frac{ \zeta (2 n)z^{2 n}}{2 n+1}\\ &amp; = -2 \sum _{n=1}^{\infty } \zeta (2 n) \frac{1}{z}\int_0^z t^{2 n} \, dt \\ &amp; = -\frac{2}{z} \int_0^z \left(\sum _{n=1}^{\infty } \zeta (2 n) t^{2 n}\right) \, dt\\ &amp; = -\frac{2}{z}\sum _{k=1}^{\infty } \int_0^z \left(\sum _{n=1}^{\infty } \frac{t^{2 n}}{k^{2 n}}\right) \, dt \\ &amp; = -\frac{2}{z} \sum _{k=1}^{\infty } \int_0^z \left(-\frac{t^2}{t^2-k^2} \right)\, dt\\ &amp; = \frac{2}{z}\sum _{k=1}^{\infty } \left(z-\frac{1}{2} k \log \left(\frac{k+z}{k-z}\right)\right) \end{align} \tag{4}$$</span></p> <p>In order to do the limit <span class="math-container">$z\to 1$</span> we consider <span class="math-container">$g_1$</span> and the first term of the sum of <span class="math-container">$g_2$</span> together plus the remaining sum:</p> <p><span class="math-container">$$s=\underset{z\to 1}{\text{lim}}(g(z))= \underset{z\to 1}{\text{lim}}\left(-\log(1-z^2) + \frac{2}{z}\left(z-\frac{1}{2} \log \left(\frac{1+z}{1-z}\right)\right)\right)+ s_3\\= 2 (1-\log (2))+s_3\tag{5}$$</span></p> <p>here <span class="math-container">$s_3=\underset{m\to \infty}{\text{lim}}s_3(m)$</span> where</p> <p><span class="math-container">$$s_3(m) = 2 \sum _{k=2}^{m } \left(1-\frac{1}{2} k \log \left(\frac{k+1}{k-1}\right)\right)\tag{6}$$</span></p> <p>This tough looking sum can luckily be done exactly by elementary operations, i.e. 
shifting summation indices and summation limits, with the result</p> <p><span class="math-container">$$s_3(m) =2 \log (m!)+2 (m-1)-m \log \left(\frac{1}{m}+1\right)-2 m \log (m)-\log (m)+\log (2)\tag{7}$$</span></p> <p>For <span class="math-container">$m\to \infty$</span> we have two non-trivial expansions</p> <p><span class="math-container">$$\log \left(\frac{1}{m}+1\right) \simeq \frac{1}{m}-\frac{1}{2 m^2}\tag{8} $$</span></p> <p>and the Stirling expansion</p> <p><span class="math-container">$$m!\simeq \left(\frac{\sqrt{\frac{\pi }{2}}}{6 \sqrt{m}}+\sqrt{2 \color{red}\pi } \sqrt{m}\right) e^{m (\log (m)-1)}\tag{9}$$</span></p> <p>which is responsible for bringing in the constant <span class="math-container">$\pi$</span>.</p> <p>Adding the asymptotic terms and going to the limit <span class="math-container">$m=\infty$</span> we obtain</p> <p><span class="math-container">$$s_3 = -3+\log (2)+\log (2 \pi )\tag{10}$$</span></p> <p>so that finally</p> <p><span class="math-container">$$s=2 (1-\log (2))-3+\log (2)+\log (2 \pi )=\log(\pi)-1\tag{11}$$</span></p> <p>which completes the derivation.</p> Answer 4: <p>Note that $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\sum_{n\geq1}\left(\frac{2n+1-2n\zeta\left(2n\right)}{n\left(2n+1\right)}\right) $$ $$=2\sum_{n\geq1}\left(\frac{1-\zeta\left(2n\right)}{2n+1}\right)+\sum_{n\geq1}\left(\frac{1}{n\left(2n+1\right)}\right)=S_{1}+S_{2}, $$ say. So let us analyze $S_{1} $. From the generating function $$\sum_{n\geq1}\zeta\left(2n\right)x^{2n}=\frac{1-\pi x\cot\left(\pi x\right)}{2},\,\left|x\right|&lt;1 $$ we get $$\sum_{n\geq1}\frac{\zeta\left(2n\right)x^{2n+1}}{2n+1}=\frac{1}{2}\left(x-\int_{0}^{x}\pi y\cot\left(\pi y\right)dy\right)\tag{1} $$ and the integral in the RHS of $(1)$ is, integrating by parts, $$\int_{0}^{x}\pi y\cot\left(\pi y\right)dy=x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{\pi}\int_{0}^{\pi x}\log\left(\sin\left(u\right)\right)du $$ $$=x\log\left(\sin\left(\pi x\right)\right)+\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)+x\log\left(2\right) $$ where $\textrm{Cl}_{2}\left(x\right) $ is the <a href="https://en.wikipedia.org/wiki/Clausen_function" rel="noreferrer">Clausen function of order $2$</a>. Now recalling Taylor series of $\textrm{arctanh}(x) $ $$\sum_{n\geq1}\frac{x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x-\frac{1}{2}\left(x-x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)-x\log\left(2\right)\right)=f(x)$$ and recalling that $$\textrm{Cl}_{2}\left(m\pi\right)=0,\, m\in\mathbb{Z} $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)}{2n+1}=\lim_{x\rightarrow1} f(x)=-\frac{3}{2}+\frac{\log\left(4\pi\right)}{2}. $$ Now we consider $S_{2} $. From the Taylor series of $\log\left(1-x\right) $ we observe that $$\sum_{n\geq1}\frac{x^{2n+1}}{n\left(2n+1\right)}=-\int_{0}^{x}\log\left(1-y^{2}\right)dy=-\int_{0}^{x}\log\left(1-y\right)dy-\int_{0}^{x}\log\left(1+y\right)dy $$ $$=-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x $$ hence $$\sum_{n\geq1}\frac{1}{n\left(2n+1\right)}=\lim_{x\rightarrow1}\left(-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x\right) $$ $$=2-\log\left(4\right) $$ so finally $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\color{red}{\log\left(\pi\right)-1}$$ as wanted. 
</p> Answer 5: <p>Using the generalisation $Q_m(x)$ of the Gamma function in </p> <p><a href="https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf" rel="nofollow noreferrer">https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf</a> </p> <p>with $\enspace\displaystyle \ln Q_m(x)=\frac{(-x)^{m+1}}{m+1}\gamma +\sum\limits_{n=2}^\infty\frac{(-x)^{m+n}}{m+n}\zeta(n)\enspace$ on page $13$, $(4.2)$, $Q_m:=Q_m(1),$</p> <p>and $\enspace\displaystyle Q_m^*(x):=(1+x)Q_m(x)\enspace$ on page $24$, $(4.8)$ and $(4.9)$, </p> <p>and with $\enspace\displaystyle Q_1^*(1)=2Q_1(1)=2\frac{\sqrt{2\pi}}{e}\enspace$ and $\enspace\displaystyle Q_1^*(-1)=\frac{1}{eQ_1(1)}=\frac{1}{\sqrt{2\pi}}\enspace$ we get</p> <p>$\enspace\displaystyle\sum\limits_{n=1}^\infty(\frac{1}{n}-\frac{2\zeta(2n)}{2n+1})=-2\sum\limits_{n=1}^\infty(\frac{\zeta(2n)}{2n+1}-\frac{1}{2n+1})+2\sum\limits_{n=1}^\infty(\frac{1}{2n}-\frac{1}{2n+1})=$ </p> <p>$\enspace\displaystyle =-2+\ln Q_1^*(1)-\ln Q_1^*(-1)+2(1-\ln 2)=\ln\pi -1$</p> Answer 6: <p>We have to prove the relation</p> <p><span class="math-container">$$s:=\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=\log(\pi)-1\tag{1}$$</span></p> <p>Instead of the sum <span class="math-container">$s$</span> we consider the generating sum (<span class="math-container">$0&lt;z&lt;1$</span>)</p> <p><span class="math-container">$$g(z)=\sum _{n=1}^{\infty } z^{2n}\left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=g_1(z)+g_2(z)\tag{2}$$</span></p> <p>so that we have <span class="math-container">$s=\underset{z\to 1_{-}}{\text{lim}}(g(z))$</span>.</p> <p>Now <span class="math-container">$$g_1(z) = -\log(1-z^2)\tag{3}$$</span></p> <p>and</p> <p><span class="math-container">$$\begin{align} g_2(z) &amp; := -2 \sum _{n=1}^{\infty } \frac{ \zeta (2 n)z^{2 n}}{2 n+1}\\ &amp; = -2 \sum _{n=1}^{\infty } \zeta (2 n) \frac{1}{z}\int_0^z t^{2 n} \, dt \\ &amp; = -\frac{2}{z} \int_0^z \left(\sum _{n=1}^{\infty } \zeta (2 n) t^{2 n}\right) \, dt\\ &amp; = -\frac{2}{z}\sum _{k=1}^{\infty } \int_0^z \left(\sum _{n=1}^{\infty } \frac{t^{2 n}}{k^{2 n}}\right) \, dt \\ &amp; = -\frac{2}{z} \sum _{k=1}^{\infty } \int_0^z \left(-\frac{t^2}{t^2-k^2} \right)\, dt\\ &amp; = \frac{2}{z}\sum _{k=1}^{\infty } \left(z-\frac{1}{2} k \log \left(\frac{k+z}{k-z}\right)\right) \end{align} \tag{4}$$</span></p> <p>In order to do the limit <span class="math-container">$z\to 1$</span> we consider <span class="math-container">$g_1$</span> and the first term of the sum of <span class="math-container">$g_2$</span> together plus the remaining sum:</p> <p><span class="math-container">$$s=\underset{z\to 1}{\text{lim}}(g(z))= \underset{z\to 1}{\text{lim}}(-\log(1-z^2) + \left(z-\frac{1}{2} \log \left(\frac{1+z}{1-z}\right)\right))+ s_3\\= 2 (1-\log (2))+s3\tag{5}$$</span></p> <p>here <span class="math-container">$s3=\underset{m\to \infty}{\text{lim}}s_3(m)$</span> where</p> <p><span class="math-container">$$s_3(m) = 2 \sum _{k=2}^{m } \left(\frac{1}{2} k \log \left(\frac{k+1}{k-1}\right)-1\right)\tag{6}$$</span></p> <p>This tough looking sum can luckily be done exactly by elementary operations, i.e. 
shifting summation indices and summation limits, with the result</p> <p><span class="math-container">$$s_3(m) =2 \log (m!)+2 (m-1)-m \log \left(\frac{1}{m}+1\right)-2 m \log (m)-\log (m)+\log (2)\tag{7}$$</span></p> <p>For <span class="math-container">$m\to \infty$</span> we have two non-trivial expansions</p> <p><span class="math-container">$$\log \left(\frac{1}{m}+1\right) \simeq \frac{1}{m}-\frac{1}{2 m^2}\tag{8} $$</span></p> <p>and the Stirling expansion</p> <p><span class="math-container">$$m!\simeq \left(\frac{\sqrt{\frac{\pi }{2}}}{6 \sqrt{m}}+\sqrt{2 \color{red}\pi } \sqrt{m}\right) e^{m (\log (m)-1)}\tag{9}$$</span></p> <p>which is responsible for bringing in the constant <span class="math-container">$\pi$</span>.</p> <p>Adding the asymptotic terms and going to the limit <span class="math-container">$m=\infty$</span> we obtain</p> <p><span class="math-container">$$s_3 = -3+\log (2)+\log (2 \pi )\tag{10}$$</span></p> <p>so that finally</p> <p><span class="math-container">$$s=2 (1-\log (2))-3+\log (2)+\log (2 \pi )=\log(\pi)-1\tag{11}$$</span></p> <p>which completes the derivation.</p> Answer 7: <p>Note that $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\sum_{n\geq1}\left(\frac{2n+1-2n\zeta\left(2n\right)}{n\left(2n+1\right)}\right) $$ $$=2\sum_{n\geq1}\left(\frac{1-\zeta\left(2n\right)}{2n+1}\right)+\sum_{n\geq1}\left(\frac{1}{n\left(2n+1\right)}\right)=S_{1}+S_{2}, $$ say. So let us analyze $S_{1} $. From the generating function $$\sum_{n\geq1}\zeta\left(2n\right)x^{2n}=\frac{1-\pi x\cot\left(\pi x\right)}{2},\,\left|x\right|&lt;1 $$ we get $$\sum_{n\geq1}\frac{\zeta\left(2n\right)x^{2n+1}}{2n+1}=\frac{1}{2}\left(x-\int_{0}^{x}\pi y\cot\left(\pi y\right)dy\right)\tag{1} $$ and the integral in the RHS of $(1)$ is, integrating by parts, $$\int_{0}^{x}\pi y\cot\left(\pi y\right)dy=x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{\pi}\int_{0}^{\pi x}\log\left(\sin\left(u\right)\right)du $$ $$=x\log\left(\sin\left(\pi x\right)\right)+\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)+x\log\left(2\right) $$ where $\textrm{Cl}_{2}\left(x\right) $ is the <a href="https://en.wikipedia.org/wiki/Clausen_function" rel="noreferrer">Clausen function of order $2$</a>. Now recalling Taylor series of $\textrm{arctanh}(x) $ $$\sum_{n\geq1}\frac{x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x-\frac{1}{2}\left(x-x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)-x\log\left(2\right)\right)=f(x)$$ and recalling that $$\textrm{Cl}_{2}\left(m\pi\right)=0,\, m\in\mathbb{Z} $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)}{2n+1}=\lim_{x\rightarrow1} f(x)=-\frac{3}{2}+\frac{\log\left(4\pi\right)}{2}. $$ Now we consider $S_{2} $. From the Taylor series of $\log\left(1-x\right) $ we observe that $$\sum_{n\geq1}\frac{x^{2n+1}}{n\left(2n+1\right)}=-\int_{0}^{x}\log\left(1-y^{2}\right)dy=-\int_{0}^{x}\log\left(1-y\right)dy-\int_{0}^{x}\log\left(1+y\right)dy $$ $$=-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x $$ hence $$\sum_{n\geq1}\frac{1}{n\left(2n+1\right)}=\lim_{x\rightarrow1}\left(-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x\right) $$ $$=2-\log\left(4\right) $$ so finally $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\color{red}{\log\left(\pi\right)-1}$$ as wanted. 
</p> Answer 8: <p>Using the generalisation $Q_m(x)$ of the Gamma function in </p> <p><a href="https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf" rel="nofollow noreferrer">https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf</a> </p> <p>with $\enspace\displaystyle \ln Q_m(x)=\frac{(-x)^{m+1}}{m+1}\gamma +\sum\limits_{n=2}^\infty\frac{(-x)^{m+n}}{m+n}\zeta(n)\enspace$ on page $13$, $(4.2)$, $Q_m:=Q_m(1),$</p> <p>and $\enspace\displaystyle Q_m^*(x):=(1+x)Q_m(x)\enspace$ on page $24$, $(4.8)$ and $(4.9)$, </p> <p>and with $\enspace\displaystyle Q_1^*(1)=2Q_1(1)=2\frac{\sqrt{2\pi}}{e}\enspace$ and $\enspace\displaystyle Q_1^*(-1)=\frac{1}{eQ_1(1)}=\frac{1}{\sqrt{2\pi}}\enspace$ we get</p> <p>$\enspace\displaystyle\sum\limits_{n=1}^\infty(\frac{1}{n}-\frac{2\zeta(2n)}{2n+1})=-2\sum\limits_{n=1}^\infty(\frac{\zeta(2n)}{2n+1}-\frac{1}{2n+1})+2\sum\limits_{n=1}^\infty(\frac{1}{2n}-\frac{1}{2n+1})=$ </p> <p>$\enspace\displaystyle =-2+\ln Q_1^*(1)-\ln Q_1^*(-1)+2(1-\ln 2)=\ln\pi -1$</p> Answer 9: <p>We have to prove the relation</p> <p><span class="math-container">$$s:=\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=\log(\pi)-1\tag{1}$$</span></p> <p>Instead of the sum <span class="math-container">$s$</span> we consider the generating sum (<span class="math-container">$0&lt;z&lt;1$</span>)</p> <p><span class="math-container">$$g(z)=\sum _{n=1}^{\infty } z^{2n}\left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=g_1(z)+g_2(z)\tag{2}$$</span></p> <p>so that we have <span class="math-container">$s=\underset{z\to 1_{-}}{\text{lim}}(g(z))$</span>.</p> <p>Now <span class="math-container">$$g_1(z) = -\log(1-z^2)\tag{3}$$</span></p> <p>and</p> <p><span class="math-container">$$\begin{align} g_2(z) &amp; := -2 \sum _{n=1}^{\infty } \frac{ \zeta (2 n)z^{2 n}}{2 n+1}\\ &amp; = -2 \sum _{n=1}^{\infty } \zeta (2 n) \frac{1}{z}\int_0^z t^{2 n} \, dt \\ &amp; = -\frac{2}{z} \int_0^z \left(\sum _{n=1}^{\infty } \zeta (2 n) t^{2 n}\right) \, dt\\ &amp; = -\frac{2}{z}\sum _{k=1}^{\infty } \int_0^z \left(\sum _{n=1}^{\infty } \frac{t^{2 n}}{k^{2 n}}\right) \, dt \\ &amp; = -\frac{2}{z} \sum _{k=1}^{\infty } \int_0^z \left(-\frac{t^2}{t^2-k^2} \right)\, dt\\ &amp; = \frac{2}{z}\sum _{k=1}^{\infty } \left(z-\frac{1}{2} k \log \left(\frac{k+z}{k-z}\right)\right) \end{align} \tag{4}$$</span></p> <p>In order to do the limit <span class="math-container">$z\to 1$</span> we consider <span class="math-container">$g_1$</span> and the first term of the sum of <span class="math-container">$g_2$</span> together plus the remaining sum:</p> <p><span class="math-container">$$s=\underset{z\to 1}{\text{lim}}(g(z))= \underset{z\to 1}{\text{lim}}(-\log(1-z^2) + \left(z-\frac{1}{2} \log \left(\frac{1+z}{1-z}\right)\right))+ s_3\\= 2 (1-\log (2))+s3\tag{5}$$</span></p> <p>here <span class="math-container">$s3=\underset{m\to \infty}{\text{lim}}s_3(m)$</span> where</p> <p><span class="math-container">$$s_3(m) = 2 \sum _{k=2}^{m } \left(\frac{1}{2} k \log \left(\frac{k+1}{k-1}\right)-1\right)\tag{6}$$</span></p> <p>This tough looking sum can luckily be done exactly by elementary operations, i.e. 
shifting summation indices and summation limits, with the result</p> <p><span class="math-container">$$s_3(m) =2 \log (m!)+2 (m-1)-m \log \left(\frac{1}{m}+1\right)-2 m \log (m)-\log (m)+\log (2)\tag{7}$$</span></p> <p>For <span class="math-container">$m\to \infty$</span> we have two non-trivial expansions</p> <p><span class="math-container">$$\log \left(\frac{1}{m}+1\right) \simeq \frac{1}{m}-\frac{1}{2 m^2}\tag{8} $$</span></p> <p>and the Stirling expansion</p> <p><span class="math-container">$$m!\simeq \left(\frac{\sqrt{\frac{\pi }{2}}}{6 \sqrt{m}}+\sqrt{2 \color{red}\pi } \sqrt{m}\right) e^{m (\log (m)-1)}\tag{9}$$</span></p> <p>which is responsible for bringing in the constant <span class="math-container">$\pi$</span>.</p> <p>Adding the asymptotic terms and going to the limit <span class="math-container">$m=\infty$</span> we obtain</p> <p><span class="math-container">$$s_3 = -3+\log (2)+\log (2 \pi )\tag{10}$$</span></p> <p>so that finally</p> <p><span class="math-container">$$s=2 (1-\log (2))-3+\log (2)+\log (2 \pi )=\log(\pi)-1\tag{11}$$</span></p> <p>which completes the derivation.</p> Answer 10: <p>Note that $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\sum_{n\geq1}\left(\frac{2n+1-2n\zeta\left(2n\right)}{n\left(2n+1\right)}\right) $$ $$=2\sum_{n\geq1}\left(\frac{1-\zeta\left(2n\right)}{2n+1}\right)+\sum_{n\geq1}\left(\frac{1}{n\left(2n+1\right)}\right)=S_{1}+S_{2}, $$ say. So let us analyze $S_{1} $. From the generating function $$\sum_{n\geq1}\zeta\left(2n\right)x^{2n}=\frac{1-\pi x\cot\left(\pi x\right)}{2},\,\left|x\right|&lt;1 $$ we get $$\sum_{n\geq1}\frac{\zeta\left(2n\right)x^{2n+1}}{2n+1}=\frac{1}{2}\left(x-\int_{0}^{x}\pi y\cot\left(\pi y\right)dy\right)\tag{1} $$ and the integral in the RHS of $(1)$ is, integrating by parts, $$\int_{0}^{x}\pi y\cot\left(\pi y\right)dy=x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{\pi}\int_{0}^{\pi x}\log\left(\sin\left(u\right)\right)du $$ $$=x\log\left(\sin\left(\pi x\right)\right)+\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)+x\log\left(2\right) $$ where $\textrm{Cl}_{2}\left(x\right) $ is the <a href="https://en.wikipedia.org/wiki/Clausen_function" rel="noreferrer">Clausen function of order $2$</a>. Now recalling Taylor series of $\textrm{arctanh}(x) $ $$\sum_{n\geq1}\frac{x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)x^{2n+1}}{2n+1}=\textrm{arctanh}(x)-x-\frac{1}{2}\left(x-x\log\left(\sin\left(\pi x\right)\right)-\frac{1}{2\pi}\textrm{Cl}_{2}\left(2\pi x\right)-x\log\left(2\right)\right)=f(x)$$ and recalling that $$\textrm{Cl}_{2}\left(m\pi\right)=0,\, m\in\mathbb{Z} $$ we get $$\sum_{n\geq1}\frac{\left(1-\zeta\left(2n\right)\right)}{2n+1}=\lim_{x\rightarrow1} f(x)=-\frac{3}{2}+\frac{\log\left(4\pi\right)}{2}. $$ Now we consider $S_{2} $. From the Taylor series of $\log\left(1-x\right) $ we observe that $$\sum_{n\geq1}\frac{x^{2n+1}}{n\left(2n+1\right)}=-\int_{0}^{x}\log\left(1-y^{2}\right)dy=-\int_{0}^{x}\log\left(1-y\right)dy-\int_{0}^{x}\log\left(1+y\right)dy $$ $$=-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x $$ hence $$\sum_{n\geq1}\frac{1}{n\left(2n+1\right)}=\lim_{x\rightarrow1}\left(-\left(1-x\right)\log\left(1-x\right)-\left(x+1\right)\log\left(x+1\right)+2x\right) $$ $$=2-\log\left(4\right) $$ so finally $$\sum_{n\geq1}\left(\frac{1}{n}-\frac{2\zeta\left(2n\right)}{2n+1}\right)=\color{red}{\log\left(\pi\right)-1}$$ as wanted. 
</p> Answer 11: <p>Using the generalisation $Q_m(x)$ of the Gamma function in </p> <p><a href="https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf" rel="nofollow noreferrer">https://www.fernuni-hagen.de/analysis/download/bachelorarbeit_aschauer.pdf</a> </p> <p>with $\enspace\displaystyle \ln Q_m(x)=\frac{(-x)^{m+1}}{m+1}\gamma +\sum\limits_{n=2}^\infty\frac{(-x)^{m+n}}{m+n}\zeta(n)\enspace$ on page $13$, $(4.2)$, $Q_m:=Q_m(1),$</p> <p>and $\enspace\displaystyle Q_m^*(x):=(1+x)Q_m(x)\enspace$ on page $24$, $(4.8)$ and $(4.9)$, </p> <p>and with $\enspace\displaystyle Q_1^*(1)=2Q_1(1)=2\frac{\sqrt{2\pi}}{e}\enspace$ and $\enspace\displaystyle Q_1^*(-1)=\frac{1}{eQ_1(1)}=\frac{1}{\sqrt{2\pi}}\enspace$ we get</p> <p>$\enspace\displaystyle\sum\limits_{n=1}^\infty(\frac{1}{n}-\frac{2\zeta(2n)}{2n+1})=-2\sum\limits_{n=1}^\infty(\frac{\zeta(2n)}{2n+1}-\frac{1}{2n+1})+2\sum\limits_{n=1}^\infty(\frac{1}{2n}-\frac{1}{2n+1})=$ </p> <p>$\enspace\displaystyle =-2+\ln Q_1^*(1)-\ln Q_1^*(-1)+2(1-\ln 2)=\ln\pi -1$</p> Answer 12: <p>We have to prove the relation</p> <p><span class="math-container">$$s:=\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=\log(\pi)-1\tag{1}$$</span></p> <p>Instead of the sum <span class="math-container">$s$</span> we consider the generating sum (<span class="math-container">$0&lt;z&lt;1$</span>)</p> <p><span class="math-container">$$g(z)=\sum _{n=1}^{\infty } z^{2n}\left(\frac{1}{n}-\frac{2 \zeta (2 n)}{2 n+1}\right)=g_1(z)+g_2(z)\tag{2}$$</span></p> <p>so that we have <span class="math-container">$s=\underset{z\to 1_{-}}{\text{lim}}(g(z))$</span>.</p> <p>Now <span class="math-container">$$g_1(z) = -\log(1-z^2)\tag{3}$$</span></p> <p>and</p> <p><span class="math-container">$$\begin{align} g_2(z) &amp; := -2 \sum _{n=1}^{\infty } \frac{ \zeta (2 n)z^{2 n}}{2 n+1}\\ &amp; = -2 \sum _{n=1}^{\infty } \zeta (2 n) \frac{1}{z}\int_0^z t^{2 n} \, dt \\ &amp; = -\frac{2}{z} \int_0^z \left(\sum _{n=1}^{\infty } \zeta (2 n) t^{2 n}\right) \, dt\\ &amp; = -\frac{2}{z}\sum _{k=1}^{\infty } \int_0^z \left(\sum _{n=1}^{\infty } \frac{t^{2 n}}{k^{2 n}}\right) \, dt \\ &amp; = -\frac{2}{z} \sum _{k=1}^{\infty } \int_0^z \left(-\frac{t^2}{t^2-k^2} \right)\, dt\\ &amp; = \frac{2}{z}\sum _{k=1}^{\infty } \left(z-\frac{1}{2} k \log \left(\frac{k+z}{k-z}\right)\right) \end{align} \tag{4}$$</span></p> <p>In order to do the limit <span class="math-container">$z\to 1$</span> we consider <span class="math-container">$g_1$</span> and the first term of the sum of <span class="math-container">$g_2$</span> together plus the remaining sum:</p> <p><span class="math-container">$$s=\underset{z\to 1}{\text{lim}}(g(z))= \underset{z\to 1}{\text{lim}}(-\log(1-z^2) + \left(z-\frac{1}{2} \log \left(\frac{1+z}{1-z}\right)\right))+ s_3\\= 2 (1-\log (2))+s3\tag{5}$$</span></p> <p>here <span class="math-container">$s3=\underset{m\to \infty}{\text{lim}}s_3(m)$</span> where</p> <p><span class="math-container">$$s_3(m) = 2 \sum _{k=2}^{m } \left(\frac{1}{2} k \log \left(\frac{k+1}{k-1}\right)-1\right)\tag{6}$$</span></p> <p>This tough looking sum can luckily be done exactly by elementary operations, i.e. 
shifting summation indices and summation limits, with the result</p> <p><span class="math-container">$$s_3(m) =2 \log (m!)+2 (m-1)-m \log \left(\frac{1}{m}+1\right)-2 m \log (m)-\log (m)+\log (2)\tag{7}$$</span></p> <p>For <span class="math-container">$m\to \infty$</span> we have two non-trivial expansions</p> <p><span class="math-container">$$\log \left(\frac{1}{m}+1\right) \simeq \frac{1}{m}-\frac{1}{2 m^2}\tag{8} $$</span></p> <p>and the Stirling expansion</p> <p><span class="math-container">$$m!\simeq \left(\frac{\sqrt{\frac{\pi }{2}}}{6 \sqrt{m}}+\sqrt{2 \color{red}\pi } \sqrt{m}\right) e^{m (\log (m)-1)}\tag{9}$$</span></p> <p>which is responsible for bringing in the constant <span class="math-container">$\pi$</span>.</p> <p>Adding the asymptotic terms and going to the limit <span class="math-container">$m=\infty$</span> we obtain</p> <p><span class="math-container">$$s_3 = -3+\log (2)+\log (2 \pi )\tag{10}$$</span></p> <p>so that finally</p> <p><span class="math-container">$$s=2 (1-\log (2))-3+\log (2)+\log (2 \pi )=\log(\pi)-1\tag{11}$$</span></p> <p>which completes the derivation.</p>
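<p>As a quick numerical cross-check of the three derivations above (a sketch of mine, not part of any answer; it assumes the <code>mpmath</code> library is available), one can exploit the split used in the Clausen-function answer: $1-\zeta(2n)$ decays like $4^{-n}$, and $\sum_{n\geq1}\frac{1}{n(2n+1)}=2-\log 4$ in closed form, so a few dozen terms already reproduce $\log(\pi)-1$ to high precision.</p>
<pre><code># Hedged numerical check of sum_{n>=1} (1/n - 2*zeta(2n)/(2n+1)) = log(pi) - 1.
# Assumes mpmath; the closed form S2 = 2 - log 4 is proved in the answers above.
from mpmath import mp, zeta, log, pi

mp.dps = 30                                    # 30 decimal digits of working precision
S1 = 2 * sum((1 - zeta(2 * n)) / (2 * n + 1) for n in range(1, 60))
S2 = 2 - log(4)                                # sum_{n>=1} 1/(n(2n+1)) in closed form
print(S1 + S2)                                 # 0.144729885849400174143427351353...
print(log(pi) - 1)                             # matches to all printed digits
</code></pre>
<p>Both printed values agree to full working precision, which is a reassuring check on the sign conventions in $(5)$ and $(6)$ above.</p>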
How to prove with Lagrange theorem
2,082,354
4
https://math.stackexchange.com/questions/2082354/how-to-prove-with-lagrange-theorem
<p>How can I prove that:<br> if $f: \Bbb R \rightarrow \Bbb R$ is convex and differentiable, such that $y = 0$ is an asymptote for $x \rightarrow \infty$, then $f(x) \ge 0, \forall x \in \Bbb R$<br> How can I prove that $f(x) \ge 0$ with the Lagrange theorem?</p>
[ { "answer_id": 2096544, "score": 0, "text": "<p>This is a possible solution to the problem (but i'd also like to see some of yours):<br>\n<strong>My Try:</strong> </p>\n\n<p>Given the following theorem:<br>\n<em>TEO.</em> Is $f$ differentiable in an interval $I$. Then $f$ is concave (convex) in $I$, if and only if $f'$is increasing (decreasing) in I.</p>\n\n<p>For the previous theorem we have that $f'(x)$ it is growing broadly.\nSo, reasoning by contradiction, we suppose that exists $x_0 \\in \\Re: f(x) &lt; 0$, then we set $\\epsilon = \\frac{|f(x)|}{3}$.<br>\nFor the definition of asymptote $\\rightarrow \\lim _{x\\to \\infty \\:}\\left(f(x)\\right)=1$, then by definition of limit, we have that exists $M \\in \\Re: |f(x)|&lt;\\epsilon, \\forall x &gt; M$.<br>\nAt this point, we choose $x_1 &gt; max\\{x_0, M\\}$, then we set</p>\n\n<p>$$L_1 = \\frac{f(x_1)-f(x_0)}{x_1-x_0}$$</p>\n\n<p>Then, we choose $x_2&gt;x_1$ so that $x_2-x_1&gt;x_1-x_0$ from which it follows that:</p>\n\n<p>$$L_2 = \\frac{f(x_2)-f(x_1)}{x_2-x_1} \\le \\frac{2\\epsilon}{x_2-x_1}&lt; \\frac{2\\epsilon}{x_1-x_0}&lt;\\frac{f(x_1)-f(x_0)}{x_1-x_0} = L_1$$</p>\n\n<p>where the last inequality follows from the fact that $f(x_0) = -3\\epsilon$ and $f(x_1)&gt; -\\epsilon$.<br>\nSo is $L_2&lt;L_1$. On the other hand, for the Lagrange's theorem applied to the function $f$ initially restricted in $[x_0,x_1]$ and then in $[x_1,x_2]$ you get $L_1=f'(c_1)$ and $L_2=f'(c_2)$, where $c_1, c_2$ are chosen so that $x_0&lt;c_1&lt;x_1$ and $x_1&lt;c_2&lt;x_2$.<br>\nSo $c_1 &lt; c_2$ and $f'(c_1) = L_1 &gt; L_2 = f'(c_2)$ for which you get a\ncontradiction to the growth in the broad sense of the first derivative function. </p>\n" }, { "answer_id": 2096659, "score": 0, "text": "<p>Let $x_0$ such that $y_0 = f(x_0) &lt; 0$ and consider $x_1 &gt; x_0$. I claim that $f(x_1) \\leq y_0 &lt; 0$. If this is true, it's a contradiction since $x_1$ was general, which shows that $y=0$ can not be an asymptote. </p>\n\n<p>To prove the claim, assume $f(x_1) &gt; y_0$, so $\\frac{f(x_1) - y_0}{x_1 - x_0} &gt; 0$. Note that since $y=0$ is an asymptote ( or, just that there is any horizontal asymptote ), as $x\\to \\infty$, we have $\\frac{f(x)-y_0}{x-x_0} \\to 0$, so in particular we can find $x_2 &gt; x_1$ such that $\\frac{f(x_2) - y_0}{x_2 - x_0} &lt; \\frac{f(x_1)-y_0}{x_1 - x_0}$, which means that $(x_1,f(x_1))$ lies above the line connecting $(x_0,f(x_0))$ and $(x_2,f(x_2))$, which is not possible since $f$ is convex. </p>\n" }, { "answer_id": 2096713, "score": 0, "text": "<p>Since $f$ is convex, its derivative is nondecreasing. In particular, the limit\n$$\n\\lim_{x\\to\\infty}f'(x)\n$$\nexists (finite or infinite). By l'Hôpital's theorem,\n$$\n\\lim_{x\\to\\infty}f'(x)=\\lim_{x\\to\\infty}\\frac{f(x)}{x}=0\n$$\nTherefore $f'(x)\\le0$, for every $x$. Thus $f$ is nonincreasing. Then, if $f(x_0)&lt;0$, for some $x_0$, we have $f(x)\\le f(x_0)$, contradicting\n$$\n\\lim_{x\\to\\infty}f(x)=0\n$$</p>\n\n<p><strong>Proposition.</strong> <em>If $f$ is differentiable and convex over $\\mathbb{R}$, then $f'$ is nondecreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. The line through $(a,f(a))$ and $(b,f(b))$ can be written as\n\\begin{gather}\ny=\\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\\\\\ny=\\frac{f(b)-f(a)}{b-a}(x-b)+f(b)\n\\end{gather}\nBy Lagrange's theorem (mean value theorem),\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\n$$\nfor some $c\\in(a,b)$. 
Then, for $t\\in(a,b)$, convexity says that\n\\begin{gather}\nf(t)\\le f'(c)(t-a)+f(a)\\\\\nf(t)\\le f'(c)(t-b)+f(b)\n\\end{gather}\nso we derive\n$$\n\\frac{f(t)-f(a)}{t-a}\\le f'(c)\\le\\frac{f(t)-f(b)}{t-b}\n$$\nand, taking the limits for $t\\to a$ and for $t\\to b$,\n$$\nf'(a)\\le f'(c)\\le f'(b)\n$$\nproving $f'(a)\\le f'(b)$. QED</p>\n\n<p><strong>Proposition.</strong> <em>Let $f$ be differentiable over $\\mathbb{R}$. If $f'(x)\\le0$, for every $x$, then $f$ is nonincreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. Then\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\\le0\n$$\nby Lagrange's theorem. Thus $f(a)\\ge f(b)$. QED</p>\n" }, { "answer_id": 2096544, "score": 0, "text": "<p>This is a possible solution to the problem (but i'd also like to see some of yours):<br>\n<strong>My Try:</strong> </p>\n\n<p>Given the following theorem:<br>\n<em>TEO.</em> Is $f$ differentiable in an interval $I$. Then $f$ is concave (convex) in $I$, if and only if $f'$is increasing (decreasing) in I.</p>\n\n<p>For the previous theorem we have that $f'(x)$ it is growing broadly.\nSo, reasoning by contradiction, we suppose that exists $x_0 \\in \\Re: f(x) &lt; 0$, then we set $\\epsilon = \\frac{|f(x)|}{3}$.<br>\nFor the definition of asymptote $\\rightarrow \\lim _{x\\to \\infty \\:}\\left(f(x)\\right)=1$, then by definition of limit, we have that exists $M \\in \\Re: |f(x)|&lt;\\epsilon, \\forall x &gt; M$.<br>\nAt this point, we choose $x_1 &gt; max\\{x_0, M\\}$, then we set</p>\n\n<p>$$L_1 = \\frac{f(x_1)-f(x_0)}{x_1-x_0}$$</p>\n\n<p>Then, we choose $x_2&gt;x_1$ so that $x_2-x_1&gt;x_1-x_0$ from which it follows that:</p>\n\n<p>$$L_2 = \\frac{f(x_2)-f(x_1)}{x_2-x_1} \\le \\frac{2\\epsilon}{x_2-x_1}&lt; \\frac{2\\epsilon}{x_1-x_0}&lt;\\frac{f(x_1)-f(x_0)}{x_1-x_0} = L_1$$</p>\n\n<p>where the last inequality follows from the fact that $f(x_0) = -3\\epsilon$ and $f(x_1)&gt; -\\epsilon$.<br>\nSo is $L_2&lt;L_1$. On the other hand, for the Lagrange's theorem applied to the function $f$ initially restricted in $[x_0,x_1]$ and then in $[x_1,x_2]$ you get $L_1=f'(c_1)$ and $L_2=f'(c_2)$, where $c_1, c_2$ are chosen so that $x_0&lt;c_1&lt;x_1$ and $x_1&lt;c_2&lt;x_2$.<br>\nSo $c_1 &lt; c_2$ and $f'(c_1) = L_1 &gt; L_2 = f'(c_2)$ for which you get a\ncontradiction to the growth in the broad sense of the first derivative function. </p>\n" }, { "answer_id": 2096659, "score": 0, "text": "<p>Let $x_0$ such that $y_0 = f(x_0) &lt; 0$ and consider $x_1 &gt; x_0$. I claim that $f(x_1) \\leq y_0 &lt; 0$. If this is true, it's a contradiction since $x_1$ was general, which shows that $y=0$ can not be an asymptote. </p>\n\n<p>To prove the claim, assume $f(x_1) &gt; y_0$, so $\\frac{f(x_1) - y_0}{x_1 - x_0} &gt; 0$. Note that since $y=0$ is an asymptote ( or, just that there is any horizontal asymptote ), as $x\\to \\infty$, we have $\\frac{f(x)-y_0}{x-x_0} \\to 0$, so in particular we can find $x_2 &gt; x_1$ such that $\\frac{f(x_2) - y_0}{x_2 - x_0} &lt; \\frac{f(x_1)-y_0}{x_1 - x_0}$, which means that $(x_1,f(x_1))$ lies above the line connecting $(x_0,f(x_0))$ and $(x_2,f(x_2))$, which is not possible since $f$ is convex. </p>\n" }, { "answer_id": 2096713, "score": 0, "text": "<p>Since $f$ is convex, its derivative is nondecreasing. In particular, the limit\n$$\n\\lim_{x\\to\\infty}f'(x)\n$$\nexists (finite or infinite). By l'Hôpital's theorem,\n$$\n\\lim_{x\\to\\infty}f'(x)=\\lim_{x\\to\\infty}\\frac{f(x)}{x}=0\n$$\nTherefore $f'(x)\\le0$, for every $x$. Thus $f$ is nonincreasing. 
Then, if $f(x_0)&lt;0$, for some $x_0$, we have $f(x)\\le f(x_0)$, contradicting\n$$\n\\lim_{x\\to\\infty}f(x)=0\n$$</p>\n\n<p><strong>Proposition.</strong> <em>If $f$ is differentiable and convex over $\\mathbb{R}$, then $f'$ is nondecreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. The line through $(a,f(a))$ and $(b,f(b))$ can be written as\n\\begin{gather}\ny=\\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\\\\\ny=\\frac{f(b)-f(a)}{b-a}(x-b)+f(b)\n\\end{gather}\nBy Lagrange's theorem (mean value theorem),\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\n$$\nfor some $c\\in(a,b)$. Then, for $t\\in(a,b)$, convexity says that\n\\begin{gather}\nf(t)\\le f'(c)(t-a)+f(a)\\\\\nf(t)\\le f'(c)(t-b)+f(b)\n\\end{gather}\nso we derive\n$$\n\\frac{f(t)-f(a)}{t-a}\\le f'(c)\\le\\frac{f(t)-f(b)}{t-b}\n$$\nand, taking the limits for $t\\to a$ and for $t\\to b$,\n$$\nf'(a)\\le f'(c)\\le f'(b)\n$$\nproving $f'(a)\\le f'(b)$. QED</p>\n\n<p><strong>Proposition.</strong> <em>Let $f$ be differentiable over $\\mathbb{R}$. If $f'(x)\\le0$, for every $x$, then $f$ is nonincreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. Then\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\\le0\n$$\nby Lagrange's theorem. Thus $f(a)\\ge f(b)$. QED</p>\n" }, { "answer_id": 2096544, "score": 0, "text": "<p>This is a possible solution to the problem (but i'd also like to see some of yours):<br>\n<strong>My Try:</strong> </p>\n\n<p>Given the following theorem:<br>\n<em>TEO.</em> Is $f$ differentiable in an interval $I$. Then $f$ is concave (convex) in $I$, if and only if $f'$is increasing (decreasing) in I.</p>\n\n<p>For the previous theorem we have that $f'(x)$ it is growing broadly.\nSo, reasoning by contradiction, we suppose that exists $x_0 \\in \\Re: f(x) &lt; 0$, then we set $\\epsilon = \\frac{|f(x)|}{3}$.<br>\nFor the definition of asymptote $\\rightarrow \\lim _{x\\to \\infty \\:}\\left(f(x)\\right)=1$, then by definition of limit, we have that exists $M \\in \\Re: |f(x)|&lt;\\epsilon, \\forall x &gt; M$.<br>\nAt this point, we choose $x_1 &gt; max\\{x_0, M\\}$, then we set</p>\n\n<p>$$L_1 = \\frac{f(x_1)-f(x_0)}{x_1-x_0}$$</p>\n\n<p>Then, we choose $x_2&gt;x_1$ so that $x_2-x_1&gt;x_1-x_0$ from which it follows that:</p>\n\n<p>$$L_2 = \\frac{f(x_2)-f(x_1)}{x_2-x_1} \\le \\frac{2\\epsilon}{x_2-x_1}&lt; \\frac{2\\epsilon}{x_1-x_0}&lt;\\frac{f(x_1)-f(x_0)}{x_1-x_0} = L_1$$</p>\n\n<p>where the last inequality follows from the fact that $f(x_0) = -3\\epsilon$ and $f(x_1)&gt; -\\epsilon$.<br>\nSo is $L_2&lt;L_1$. On the other hand, for the Lagrange's theorem applied to the function $f$ initially restricted in $[x_0,x_1]$ and then in $[x_1,x_2]$ you get $L_1=f'(c_1)$ and $L_2=f'(c_2)$, where $c_1, c_2$ are chosen so that $x_0&lt;c_1&lt;x_1$ and $x_1&lt;c_2&lt;x_2$.<br>\nSo $c_1 &lt; c_2$ and $f'(c_1) = L_1 &gt; L_2 = f'(c_2)$ for which you get a\ncontradiction to the growth in the broad sense of the first derivative function. </p>\n" }, { "answer_id": 2096659, "score": 0, "text": "<p>Let $x_0$ such that $y_0 = f(x_0) &lt; 0$ and consider $x_1 &gt; x_0$. I claim that $f(x_1) \\leq y_0 &lt; 0$. If this is true, it's a contradiction since $x_1$ was general, which shows that $y=0$ can not be an asymptote. </p>\n\n<p>To prove the claim, assume $f(x_1) &gt; y_0$, so $\\frac{f(x_1) - y_0}{x_1 - x_0} &gt; 0$. 
Note that since $y=0$ is an asymptote ( or, just that there is any horizontal asymptote ), as $x\\to \\infty$, we have $\\frac{f(x)-y_0}{x-x_0} \\to 0$, so in particular we can find $x_2 &gt; x_1$ such that $\\frac{f(x_2) - y_0}{x_2 - x_0} &lt; \\frac{f(x_1)-y_0}{x_1 - x_0}$, which means that $(x_1,f(x_1))$ lies above the line connecting $(x_0,f(x_0))$ and $(x_2,f(x_2))$, which is not possible since $f$ is convex. </p>\n" }, { "answer_id": 2096713, "score": 0, "text": "<p>Since $f$ is convex, its derivative is nondecreasing. In particular, the limit\n$$\n\\lim_{x\\to\\infty}f'(x)\n$$\nexists (finite or infinite). By l'Hôpital's theorem,\n$$\n\\lim_{x\\to\\infty}f'(x)=\\lim_{x\\to\\infty}\\frac{f(x)}{x}=0\n$$\nTherefore $f'(x)\\le0$, for every $x$. Thus $f$ is nonincreasing. Then, if $f(x_0)&lt;0$, for some $x_0$, we have $f(x)\\le f(x_0)$, contradicting\n$$\n\\lim_{x\\to\\infty}f(x)=0\n$$</p>\n\n<p><strong>Proposition.</strong> <em>If $f$ is differentiable and convex over $\\mathbb{R}$, then $f'$ is nondecreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. The line through $(a,f(a))$ and $(b,f(b))$ can be written as\n\\begin{gather}\ny=\\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\\\\\ny=\\frac{f(b)-f(a)}{b-a}(x-b)+f(b)\n\\end{gather}\nBy Lagrange's theorem (mean value theorem),\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\n$$\nfor some $c\\in(a,b)$. Then, for $t\\in(a,b)$, convexity says that\n\\begin{gather}\nf(t)\\le f'(c)(t-a)+f(a)\\\\\nf(t)\\le f'(c)(t-b)+f(b)\n\\end{gather}\nso we derive\n$$\n\\frac{f(t)-f(a)}{t-a}\\le f'(c)\\le\\frac{f(t)-f(b)}{t-b}\n$$\nand, taking the limits for $t\\to a$ and for $t\\to b$,\n$$\nf'(a)\\le f'(c)\\le f'(b)\n$$\nproving $f'(a)\\le f'(b)$. QED</p>\n\n<p><strong>Proposition.</strong> <em>Let $f$ be differentiable over $\\mathbb{R}$. If $f'(x)\\le0$, for every $x$, then $f$ is nonincreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. Then\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\\le0\n$$\nby Lagrange's theorem. Thus $f(a)\\ge f(b)$. QED</p>\n" }, { "answer_id": 2096544, "score": 0, "text": "<p>This is a possible solution to the problem (but i'd also like to see some of yours):<br>\n<strong>My Try:</strong> </p>\n\n<p>Given the following theorem:<br>\n<em>TEO.</em> Is $f$ differentiable in an interval $I$. Then $f$ is concave (convex) in $I$, if and only if $f'$is increasing (decreasing) in I.</p>\n\n<p>For the previous theorem we have that $f'(x)$ it is growing broadly.\nSo, reasoning by contradiction, we suppose that exists $x_0 \\in \\Re: f(x) &lt; 0$, then we set $\\epsilon = \\frac{|f(x)|}{3}$.<br>\nFor the definition of asymptote $\\rightarrow \\lim _{x\\to \\infty \\:}\\left(f(x)\\right)=1$, then by definition of limit, we have that exists $M \\in \\Re: |f(x)|&lt;\\epsilon, \\forall x &gt; M$.<br>\nAt this point, we choose $x_1 &gt; max\\{x_0, M\\}$, then we set</p>\n\n<p>$$L_1 = \\frac{f(x_1)-f(x_0)}{x_1-x_0}$$</p>\n\n<p>Then, we choose $x_2&gt;x_1$ so that $x_2-x_1&gt;x_1-x_0$ from which it follows that:</p>\n\n<p>$$L_2 = \\frac{f(x_2)-f(x_1)}{x_2-x_1} \\le \\frac{2\\epsilon}{x_2-x_1}&lt; \\frac{2\\epsilon}{x_1-x_0}&lt;\\frac{f(x_1)-f(x_0)}{x_1-x_0} = L_1$$</p>\n\n<p>where the last inequality follows from the fact that $f(x_0) = -3\\epsilon$ and $f(x_1)&gt; -\\epsilon$.<br>\nSo is $L_2&lt;L_1$. 
On the other hand, for the Lagrange's theorem applied to the function $f$ initially restricted in $[x_0,x_1]$ and then in $[x_1,x_2]$ you get $L_1=f'(c_1)$ and $L_2=f'(c_2)$, where $c_1, c_2$ are chosen so that $x_0&lt;c_1&lt;x_1$ and $x_1&lt;c_2&lt;x_2$.<br>\nSo $c_1 &lt; c_2$ and $f'(c_1) = L_1 &gt; L_2 = f'(c_2)$ for which you get a\ncontradiction to the growth in the broad sense of the first derivative function. </p>\n" }, { "answer_id": 2096659, "score": 0, "text": "<p>Let $x_0$ such that $y_0 = f(x_0) &lt; 0$ and consider $x_1 &gt; x_0$. I claim that $f(x_1) \\leq y_0 &lt; 0$. If this is true, it's a contradiction since $x_1$ was general, which shows that $y=0$ can not be an asymptote. </p>\n\n<p>To prove the claim, assume $f(x_1) &gt; y_0$, so $\\frac{f(x_1) - y_0}{x_1 - x_0} &gt; 0$. Note that since $y=0$ is an asymptote ( or, just that there is any horizontal asymptote ), as $x\\to \\infty$, we have $\\frac{f(x)-y_0}{x-x_0} \\to 0$, so in particular we can find $x_2 &gt; x_1$ such that $\\frac{f(x_2) - y_0}{x_2 - x_0} &lt; \\frac{f(x_1)-y_0}{x_1 - x_0}$, which means that $(x_1,f(x_1))$ lies above the line connecting $(x_0,f(x_0))$ and $(x_2,f(x_2))$, which is not possible since $f$ is convex. </p>\n" }, { "answer_id": 2096713, "score": 0, "text": "<p>Since $f$ is convex, its derivative is nondecreasing. In particular, the limit\n$$\n\\lim_{x\\to\\infty}f'(x)\n$$\nexists (finite or infinite). By l'Hôpital's theorem,\n$$\n\\lim_{x\\to\\infty}f'(x)=\\lim_{x\\to\\infty}\\frac{f(x)}{x}=0\n$$\nTherefore $f'(x)\\le0$, for every $x$. Thus $f$ is nonincreasing. Then, if $f(x_0)&lt;0$, for some $x_0$, we have $f(x)\\le f(x_0)$, contradicting\n$$\n\\lim_{x\\to\\infty}f(x)=0\n$$</p>\n\n<p><strong>Proposition.</strong> <em>If $f$ is differentiable and convex over $\\mathbb{R}$, then $f'$ is nondecreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. The line through $(a,f(a))$ and $(b,f(b))$ can be written as\n\\begin{gather}\ny=\\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\\\\\ny=\\frac{f(b)-f(a)}{b-a}(x-b)+f(b)\n\\end{gather}\nBy Lagrange's theorem (mean value theorem),\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\n$$\nfor some $c\\in(a,b)$. Then, for $t\\in(a,b)$, convexity says that\n\\begin{gather}\nf(t)\\le f'(c)(t-a)+f(a)\\\\\nf(t)\\le f'(c)(t-b)+f(b)\n\\end{gather}\nso we derive\n$$\n\\frac{f(t)-f(a)}{t-a}\\le f'(c)\\le\\frac{f(t)-f(b)}{t-b}\n$$\nand, taking the limits for $t\\to a$ and for $t\\to b$,\n$$\nf'(a)\\le f'(c)\\le f'(b)\n$$\nproving $f'(a)\\le f'(b)$. QED</p>\n\n<p><strong>Proposition.</strong> <em>Let $f$ be differentiable over $\\mathbb{R}$. If $f'(x)\\le0$, for every $x$, then $f$ is nonincreasing.</em></p>\n\n<p><em>Proof.</em> Let $a&lt;b$. Then\n$$\n\\frac{f(b)-f(a)}{b-a}=f'(c)\\le0\n$$\nby Lagrange's theorem. Thus $f(a)\\ge f(b)$. QED</p>\n" } ]
Question: How to prove with Lagrange theorem
Answer 1: <p>This is a possible solution to the problem (but I'd also like to see some of yours):<br> <strong>My Try:</strong> </p> <p>We use the following theorem:<br> <em>Theorem.</em> Let $f$ be differentiable on an interval $I$. Then $f$ is convex (concave) on $I$ if and only if $f'$ is nondecreasing (nonincreasing) on $I$.</p> <p>By the previous theorem, $f'$ is nondecreasing. Reasoning by contradiction, we suppose there exists $x_0 \in \mathbb{R}$ with $f(x_0) &lt; 0$, and we set $\epsilon = \frac{|f(x_0)|}{3}$.<br> Since $y=0$ is an asymptote, $\lim _{x\to \infty \:}f(x)=0$, so by the definition of limit there exists $M \in \mathbb{R}$ such that $|f(x)|&lt;\epsilon$ for all $x &gt; M$.<br> At this point, we choose $x_1 &gt; \max\{x_0, M\}$ and set</p> <p>$$L_1 = \frac{f(x_1)-f(x_0)}{x_1-x_0}$$</p> <p>Then we choose $x_2&gt;x_1$ so that $x_2-x_1&gt;x_1-x_0$, from which it follows that</p> <p>$$L_2 = \frac{f(x_2)-f(x_1)}{x_2-x_1} \le \frac{2\epsilon}{x_2-x_1}&lt; \frac{2\epsilon}{x_1-x_0}&lt;\frac{f(x_1)-f(x_0)}{x_1-x_0} = L_1$$</p> <p>where the last inequality follows from the fact that $f(x_0) = -3\epsilon$ and $f(x_1)&gt; -\epsilon$.<br> So $L_2&lt;L_1$. On the other hand, by Lagrange's theorem applied to $f$ restricted first to $[x_0,x_1]$ and then to $[x_1,x_2]$, we get $L_1=f'(c_1)$ and $L_2=f'(c_2)$, where $c_1, c_2$ satisfy $x_0&lt;c_1&lt;x_1$ and $x_1&lt;c_2&lt;x_2$.<br> So $c_1 &lt; c_2$ and $f'(c_1) = L_1 &gt; L_2 = f'(c_2)$, which contradicts the fact that $f'$ is nondecreasing. </p>
Answer 2: <p>Let $x_0$ be such that $y_0 = f(x_0) &lt; 0$ and consider $x_1 &gt; x_0$. I claim that $f(x_1) \leq y_0 &lt; 0$. If this is true, it's a contradiction since $x_1$ was arbitrary, which shows that $y=0$ cannot be an asymptote. </p> <p>To prove the claim, assume $f(x_1) &gt; y_0$, so $\frac{f(x_1) - y_0}{x_1 - x_0} &gt; 0$. Note that since $y=0$ is an asymptote (or, indeed, just that there is some horizontal asymptote), as $x\to \infty$ we have $\frac{f(x)-y_0}{x-x_0} \to 0$, so in particular we can find $x_2 &gt; x_1$ such that $\frac{f(x_2) - y_0}{x_2 - x_0} &lt; \frac{f(x_1)-y_0}{x_1 - x_0}$, which means that $(x_1,f(x_1))$ lies above the line connecting $(x_0,f(x_0))$ and $(x_2,f(x_2))$, which is not possible since $f$ is convex. </p>
Answer 3: <p>Since $f$ is convex, its derivative is nondecreasing. In particular, the limit $$ \lim_{x\to\infty}f'(x) $$ exists (finite or infinite). By l'Hôpital's theorem, $$ \lim_{x\to\infty}f'(x)=\lim_{x\to\infty}\frac{f(x)}{x}=0 $$ Therefore $f'(x)\le0$, for every $x$. Thus $f$ is nonincreasing. Then, if $f(x_0)&lt;0$, for some $x_0$, we have $f(x)\le f(x_0)$ for all $x\ge x_0$, contradicting $$ \lim_{x\to\infty}f(x)=0 $$</p> <p><strong>Proposition.</strong> <em>If $f$ is differentiable and convex over $\mathbb{R}$, then $f'$ is nondecreasing.</em></p> <p><em>Proof.</em> Let $a&lt;b$. The line through $(a,f(a))$ and $(b,f(b))$ can be written as \begin{gather} y=\frac{f(b)-f(a)}{b-a}(x-a)+f(a)\\ y=\frac{f(b)-f(a)}{b-a}(x-b)+f(b) \end{gather} By Lagrange's theorem (mean value theorem), $$ \frac{f(b)-f(a)}{b-a}=f'(c) $$ for some $c\in(a,b)$. Then, for $t\in(a,b)$, convexity says that \begin{gather} f(t)\le f'(c)(t-a)+f(a)\\ f(t)\le f'(c)(t-b)+f(b) \end{gather} so we derive $$ \frac{f(t)-f(a)}{t-a}\le f'(c)\le\frac{f(t)-f(b)}{t-b} $$ and, taking the limits for $t\to a$ and for $t\to b$, $$ f'(a)\le f'(c)\le f'(b) $$ proving $f'(a)\le f'(b)$. QED</p> <p><strong>Proposition.</strong> <em>Let $f$ be differentiable over $\mathbb{R}$. If $f'(x)\le0$, for every $x$, then $f$ is nonincreasing.</em></p> <p><em>Proof.</em> Let $a&lt;b$. Then $$ \frac{f(b)-f(a)}{b-a}=f'(c)\le0 $$ by Lagrange's theorem. Thus $f(a)\ge f(b)$. QED</p>
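<p>To see the mean-value argument of these answers in action on a concrete example (a hedged illustration of mine, not taken from any answer), take the convex function $f(x)=e^{-x}$, which is differentiable, has $y=0$ as an asymptote for $x\to\infty$, and indeed satisfies $f(x)\ge 0$. The difference quotients that Lagrange's theorem turns into values of $f'$ are visibly nondecreasing as the interval moves to the right, which is exactly the monotonicity that the contradiction arguments exploit.</p>
<pre><code># Difference quotients of a convex f with horizontal asymptote y = 0.
# By Lagrange's theorem each quotient equals f'(c) for some c in the interval,
# so for a convex f they must be nondecreasing from left to right.
import math

def f(x):
    return math.exp(-x)   # convex, differentiable, f(x) -> 0 as x -> infinity

for a, b in [(0, 1), (1, 2), (2, 4), (4, 8)]:
    L = (f(b) - f(a)) / (b - a)
    print(f"slope on [{a}, {b}] = {L:+.6f}")
# Prints -0.632121, -0.232544, -0.058510, -0.004495: increasing toward 0
# from below, matching the nondecreasing-derivative theorem used above.
</code></pre>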
$\lim_{x\to 1^-} \sum_{k=0}^\infty \left( x^{k^2}-x^{(k+\alpha)^2}\right)$
2,082,380
4
https://math.stackexchange.com/questions/2082380/lim-x-to-1-sum-k-0-infty-left-xk2-xk-alpha2-right
<p>Has anybody seen (or can anybody come up with) a proof that $$\lim_{x\to 1^-} \sum_{k=0}^\infty \left( x^{k^2}-x^{(k+\alpha)^2}\right) = \alpha$$ for all $\alpha &gt; 0$?</p> <p>And also that $$\lim_{\alpha\to 0^+} \left[ \lim_{x\to 1^-}\frac{\alpha - \sum_{k=0}^\infty \left( x^{k^2}-x^{(k+\alpha)^2}\right)}{\alpha(1-x)} \right]=\frac16$$</p> <p>I thought I had a proof of the first limit but it was flawed.</p> <p><strong>EDIT</strong> I had written the wrong expression for the second of the limits. It is now corrected.</p>
[ { "answer_id": 2082392, "score": 7, "text": "<p>If we set $x=e^{-t}$ we have to prove that\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\alpha.$$\nAs a function of $t$, the above series is regular enough to make us able to state:\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\lim_{m\\to +\\infty}\\int_{0}^{+\\infty}me^{-mt}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right)\\,dt$$\n(we use $me^{-mt}$ as an approximation of the Dirac delta distribution) and the original limit is converted into:\n$$ \\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\left(\\frac{m}{m+k^2}-\\frac{m}{m+(k+\\alpha)^2}\\right)=\\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\frac{m(2\\alpha k+\\alpha^2)}{(m+k^2)(m+(k+\\alpha)^2)}$$\nNow the last series can be computed through the <a href=\"https://en.wikipedia.org/wiki/Digamma_function\" rel=\"noreferrer\">digamma function</a> and the limit as $m\\to +\\infty$ is exactly $\\alpha$ as wanted, since:\n$$\\frac{i}{2}\\left[\\psi(\\alpha-im)-\\psi(\\alpha+im)\\right] = \\frac{\\pi}{2}+\\frac{\\tfrac{1}{2}-\\alpha}{m}+O\\left(\\frac{1}{m^3}\\right). $$\nThe second question is essentially equivalent to finding an extra term in the previous asymptotic expansion.</p>\n" }, { "answer_id": 2082463, "score": 2, "text": "<p>From the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a>, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^K (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^K (e^{y^2\\log(x)}-e^{(y+\\alpha)^2\\log(x)})\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\\\\\\\\n&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy-\\int_K^{K+\\alpha}e^{-y^2|\\log(x)|}\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 1\\\\\\\\\n\\end{align}$$</p>\n\n<p>Taking the limit as $K\\to \\infty$ in $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy\n-\\frac{1-e^{-\\alpha^2|\\log(x)|}}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^\\infty \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 2\n\\end{align}$$</p>\n\n<p>Taking the limit as $x\\to 1^-$ in $(2)$ yields the coveted result</p>\n\n<p>$$\\lim_{x\\to ^-1}\\sum_{k=0}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})=\\alpha$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2082392, "score": 7, "text": "<p>If we set $x=e^{-t}$ we have to prove that\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\alpha.$$\nAs a function of $t$, the above series is regular enough to make us able to state:\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\lim_{m\\to +\\infty}\\int_{0}^{+\\infty}me^{-mt}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right)\\,dt$$\n(we use $me^{-mt}$ as an approximation of the Dirac delta distribution) and the original limit is converted into:\n$$ \\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\left(\\frac{m}{m+k^2}-\\frac{m}{m+(k+\\alpha)^2}\\right)=\\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\frac{m(2\\alpha 
k+\\alpha^2)}{(m+k^2)(m+(k+\\alpha)^2)}$$\nNow the last series can be computed through the <a href=\"https://en.wikipedia.org/wiki/Digamma_function\" rel=\"noreferrer\">digamma function</a> and the limit as $m\\to +\\infty$ is exactly $\\alpha$ as wanted, since:\n$$\\frac{i}{2}\\left[\\psi(\\alpha-im)-\\psi(\\alpha+im)\\right] = \\frac{\\pi}{2}+\\frac{\\tfrac{1}{2}-\\alpha}{m}+O\\left(\\frac{1}{m^3}\\right). $$\nThe second question is essentially equivalent to finding an extra term in the previous asymptotic expansion.</p>\n" }, { "answer_id": 2082463, "score": 2, "text": "<p>From the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a>, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^K (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^K (e^{y^2\\log(x)}-e^{(y+\\alpha)^2\\log(x)})\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\\\\\\\\n&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy-\\int_K^{K+\\alpha}e^{-y^2|\\log(x)|}\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 1\\\\\\\\\n\\end{align}$$</p>\n\n<p>Taking the limit as $K\\to \\infty$ in $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy\n-\\frac{1-e^{-\\alpha^2|\\log(x)|}}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^\\infty \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 2\n\\end{align}$$</p>\n\n<p>Taking the limit as $x\\to 1^-$ in $(2)$ yields the coveted result</p>\n\n<p>$$\\lim_{x\\to ^-1}\\sum_{k=0}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})=\\alpha$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2082392, "score": 7, "text": "<p>If we set $x=e^{-t}$ we have to prove that\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\alpha.$$\nAs a function of $t$, the above series is regular enough to make us able to state:\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\lim_{m\\to +\\infty}\\int_{0}^{+\\infty}me^{-mt}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right)\\,dt$$\n(we use $me^{-mt}$ as an approximation of the Dirac delta distribution) and the original limit is converted into:\n$$ \\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\left(\\frac{m}{m+k^2}-\\frac{m}{m+(k+\\alpha)^2}\\right)=\\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\frac{m(2\\alpha k+\\alpha^2)}{(m+k^2)(m+(k+\\alpha)^2)}$$\nNow the last series can be computed through the <a href=\"https://en.wikipedia.org/wiki/Digamma_function\" rel=\"noreferrer\">digamma function</a> and the limit as $m\\to +\\infty$ is exactly $\\alpha$ as wanted, since:\n$$\\frac{i}{2}\\left[\\psi(\\alpha-im)-\\psi(\\alpha+im)\\right] = \\frac{\\pi}{2}+\\frac{\\tfrac{1}{2}-\\alpha}{m}+O\\left(\\frac{1}{m^3}\\right). 
$$\nThe second question is essentially equivalent to finding an extra term in the previous asymptotic expansion.</p>\n" }, { "answer_id": 2082463, "score": 2, "text": "<p>From the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a>, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^K (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^K (e^{y^2\\log(x)}-e^{(y+\\alpha)^2\\log(x)})\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\\\\\\\\n&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy-\\int_K^{K+\\alpha}e^{-y^2|\\log(x)|}\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 1\\\\\\\\\n\\end{align}$$</p>\n\n<p>Taking the limit as $K\\to \\infty$ in $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy\n-\\frac{1-e^{-\\alpha^2|\\log(x)|}}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^\\infty \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 2\n\\end{align}$$</p>\n\n<p>Taking the limit as $x\\to 1^-$ in $(2)$ yields the coveted result</p>\n\n<p>$$\\lim_{x\\to ^-1}\\sum_{k=0}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})=\\alpha$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2082392, "score": 7, "text": "<p>If we set $x=e^{-t}$ we have to prove that\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\alpha.$$\nAs a function of $t$, the above series is regular enough to make us able to state:\n$$ \\lim_{t\\to 0^+}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right) = \\lim_{m\\to +\\infty}\\int_{0}^{+\\infty}me^{-mt}\\sum_{k\\geq 0}\\left(e^{-tk^2}-e^{-t(k+\\alpha)^2}\\right)\\,dt$$\n(we use $me^{-mt}$ as an approximation of the Dirac delta distribution) and the original limit is converted into:\n$$ \\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\left(\\frac{m}{m+k^2}-\\frac{m}{m+(k+\\alpha)^2}\\right)=\\lim_{m\\to +\\infty}\\sum_{k\\geq 0}\\frac{m(2\\alpha k+\\alpha^2)}{(m+k^2)(m+(k+\\alpha)^2)}$$\nNow the last series can be computed through the <a href=\"https://en.wikipedia.org/wiki/Digamma_function\" rel=\"noreferrer\">digamma function</a> and the limit as $m\\to +\\infty$ is exactly $\\alpha$ as wanted, since:\n$$\\frac{i}{2}\\left[\\psi(\\alpha-im)-\\psi(\\alpha+im)\\right] = \\frac{\\pi}{2}+\\frac{\\tfrac{1}{2}-\\alpha}{m}+O\\left(\\frac{1}{m^3}\\right). 
$$\nThe second question is essentially equivalent to finding an extra term in the previous asymptotic expansion.</p>\n" }, { "answer_id": 2082463, "score": 2, "text": "<p>From the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a>, we have</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^K (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^K (e^{y^2\\log(x)}-e^{(y+\\alpha)^2\\log(x)})\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\\\\\\\\n&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy-\\int_K^{K+\\alpha}e^{-y^2|\\log(x)|}\\,dy\\\\\\\\\n&amp;+\\frac{e^{-K^2|\\log(x)|}-e^{-(K+\\alpha)^2|\\log(x)|}-(1-e^{-\\alpha^2|\\log(x)|})}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^K \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 1\\\\\\\\\n\\end{align}$$</p>\n\n<p>Taking the limit as $K\\to \\infty$ in $(1)$ reveals</p>\n\n<p>$$\\begin{align}\n\\sum_{k=1}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})&amp;=\\int_0^\\alpha e^{-y^2|\\log(x)|}\\,dy\n-\\frac{1-e^{-\\alpha^2|\\log(x)|}}2 \\\\\\\\\n&amp;+\\log(x)\\int_0^\\infty \\left(2ye^{-y^2|\\log(x)|}-2(y+\\alpha)e^{-(y+\\alpha)^2|\\log(x)|}\\right)P_1(y)\\,dy \\tag 2\n\\end{align}$$</p>\n\n<p>Taking the limit as $x\\to 1^-$ in $(2)$ yields the coveted result</p>\n\n<p>$$\\lim_{x\\to ^-1}\\sum_{k=0}^\\infty (x^{k^2}-x^{(k+\\alpha)^2})=\\alpha$$</p>\n\n<p>as was to be shown!</p>\n" } ]
Question: $\lim_{x\to 1^-} \sum_{k=0}^\infty \left( x^{k^2}-x^{(k+\alpha)^2}\right)$
Answer 1: <p>If we set $x=e^{-t}$ we have to prove that $$ \lim_{t\to 0^+}\sum_{k\geq 0}\left(e^{-tk^2}-e^{-t(k+\alpha)^2}\right) = \alpha.$$ As a function of $t$, the above series is regular enough that we are able to state: $$ \lim_{t\to 0^+}\sum_{k\geq 0}\left(e^{-tk^2}-e^{-t(k+\alpha)^2}\right) = \lim_{m\to +\infty}\int_{0}^{+\infty}me^{-mt}\sum_{k\geq 0}\left(e^{-tk^2}-e^{-t(k+\alpha)^2}\right)\,dt$$ (we use $me^{-mt}$ as an approximation of the Dirac delta distribution) and the original limit is converted into: $$ \lim_{m\to +\infty}\sum_{k\geq 0}\left(\frac{m}{m+k^2}-\frac{m}{m+(k+\alpha)^2}\right)=\lim_{m\to +\infty}\sum_{k\geq 0}\frac{m(2\alpha k+\alpha^2)}{(m+k^2)(m+(k+\alpha)^2)}$$ Now the last series can be computed through the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="noreferrer">digamma function</a> and the limit as $m\to +\infty$ is exactly $\alpha$ as wanted, since (applied with $\sqrt{m}$ in place of $m$): $$\frac{i}{2}\left[\psi(\alpha-im)-\psi(\alpha+im)\right] = \frac{\pi}{2}+\frac{\tfrac{1}{2}-\alpha}{m}+O\left(\frac{1}{m^3}\right). $$ The second question is essentially equivalent to finding an extra term in the previous asymptotic expansion.</p>
Answer 2: <p>From the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula" rel="nofollow noreferrer">Euler-Maclaurin Summation Formula</a>, we have</p> <p>$$\begin{align} \sum_{k=1}^K (x^{k^2}-x^{(k+\alpha)^2})&amp;=\int_0^K (e^{y^2\log(x)}-e^{(y+\alpha)^2\log(x)})\,dy\\\\ &amp;+\frac{e^{-K^2|\log(x)|}-e^{-(K+\alpha)^2|\log(x)|}-(1-e^{-\alpha^2|\log(x)|})}2 \\\\ &amp;+\log(x)\int_0^K \left(2ye^{-y^2|\log(x)|}-2(y+\alpha)e^{-(y+\alpha)^2|\log(x)|}\right)P_1(y)\,dy \\\\ &amp;=\int_0^\alpha e^{-y^2|\log(x)|}\,dy-\int_K^{K+\alpha}e^{-y^2|\log(x)|}\,dy\\\\ &amp;+\frac{e^{-K^2|\log(x)|}-e^{-(K+\alpha)^2|\log(x)|}-(1-e^{-\alpha^2|\log(x)|})}2 \\\\ &amp;+\log(x)\int_0^K \left(2ye^{-y^2|\log(x)|}-2(y+\alpha)e^{-(y+\alpha)^2|\log(x)|}\right)P_1(y)\,dy \tag 1\\\\ \end{align}$$</p> <p>Taking the limit as $K\to \infty$ in $(1)$ reveals</p> <p>$$\begin{align} \sum_{k=1}^\infty (x^{k^2}-x^{(k+\alpha)^2})&amp;=\int_0^\alpha e^{-y^2|\log(x)|}\,dy -\frac{1-e^{-\alpha^2|\log(x)|}}2 \\\\ &amp;+\log(x)\int_0^\infty \left(2ye^{-y^2|\log(x)|}-2(y+\alpha)e^{-(y+\alpha)^2|\log(x)|}\right)P_1(y)\,dy \tag 2 \end{align}$$</p> <p>Taking the limit as $x\to 1^-$ in $(2)$, and noting that the $k=0$ term $1-x^{\alpha^2}$ vanishes in this limit, yields the coveted result</p> <p>$$\lim_{x\to 1^-}\sum_{k=0}^\infty (x^{k^2}-x^{(k+\alpha)^2})=\alpha$$</p> <p>as was to be shown!</p>
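<p>Both answers can be sanity-checked numerically. The sketch below is mine, not part of either answer; it uses the substitution $x=e^{-t}$ from the first answer, so the limit $x\to 1^-$ becomes $t\to 0^+$, and the cutoff $K\approx 30/\sqrt{t}$ makes the discarded tail of order $e^{-900}$.</p>
<pre><code># Numerical check that sum_k (x^{k^2} - x^{(k+a)^2}) tends to a as x -> 1^-,
# written with x = exp(-t) so the limit becomes t -> 0^+.
import math

def S(t, a):
    K = int(30 / math.sqrt(t)) + 1     # exp(-t*K^2) is about exp(-900): negligible tail
    return sum(math.exp(-t * k * k) - math.exp(-t * (k + a) ** 2) for k in range(K))

a = 0.3
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(t, S(t, a))                  # values creep up toward a = 0.3
</code></pre>
<p>For the second limit one can additionally track $(\alpha - S(t,\alpha))/(\alpha t)$, noting $1-x\sim t$, first for small $t$ and then for small $\alpha$; the values should approach $\tfrac16$, in line with the extra asymptotic term mentioned in the first answer.</p>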
Prove $\int_0^1 {3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \, dx={1\over 2}$ using an alternative method
2,082,389
4
https://math.stackexchange.com/questions/2082389/prove-int-01-3-sqrt5x-over-1-sqrt5x3-dx-1-over-2-using-an-al
<p>Prove that</p> <p>$$\int_0^1 {3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \,dx={1\over 2}\tag1$$</p> <p>My try:</p> <p>$u=1+\sqrt{5}x$ then $du=\sqrt{5} \, dx$</p> <p>$${1\over \sqrt 5}\int_1^{1+\sqrt{5}}(4u^{-3}-u^{-2}) \, du$$</p> <p>$$\left. {1\over \sqrt{5}}(-2u^{-2}+u^{-1}) \right|_1^{1+\sqrt{5}}={1\over 2}$$</p> <p>Prove $(1)$ using an alternative method other than substitution method.</p>
[ { "answer_id": 2082403, "score": 6, "text": "<p>Using integration by parts : $$\\begin{align}\\int_0^1 {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx &amp;= \\int_0^1 \\left( - \\frac{1}{2\\sqrt{5} (1+\\sqrt{5}x)^2} \\right)'(3-\\sqrt{5}x) \\, dx \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2}\\right]_0^1 - \\frac{1}{2} \\int_0^1 \\frac{dx}{(1+\\sqrt{5}x)^2} \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2} + \\frac{1}{10x + 2\\sqrt{5}}\\right]_0^1 \\\\[8pt] &amp;= \\frac{1}{2} \\end{align} $$</p>\n" }, { "answer_id": 2082442, "score": 6, "text": "<p>Set $\\frac{3-\\sqrt{5}x}{1+\\sqrt{5}x}=u$, we have $\\frac{-4\\sqrt{5}} \n{(1+\\sqrt{5}x)^2} dx=du$ thus\n$$\\int_{0}^{1} {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\,dx=\\frac{1}{-4\\sqrt{5}}\\int_{3}^{\\sqrt{5}-2}u\\,du=\\frac 12$$</p>\n" }, { "answer_id": 2082416, "score": 2, "text": "<p>Set $x=\\frac{\\sinh^2u}{\\sqrt{5}}$ and $dx=\\frac{2\\sinh u\\cosh u}{\\sqrt{5}}$</p>\n\n<p>So $$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx=\\int \\frac{6\\sinh u\\cosh u}{\\sqrt{5}\\cosh^6 u}du -\\int\\frac{2\\sinh^3u \\cosh u}{\\sqrt{5}\\cosh^6 u} \\, du = \\int \\frac{6\\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du -\\int \\frac{2(1+\\cosh^2 u ) \\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du$$</p>\n\n<p>And you can calculate these integrals using the substitution $t=\\cosh u$ and $dt=\\sinh u \\, du$</p>\n\n<p>Another way using partial fractions: </p>\n\n<p>$$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} dx= \\int \\frac{20\\sqrt{5}}{(5x+\\sqrt{5})^3}- \\frac{5}{(5x+\\sqrt{5})^2}dx $$</p>\n" }, { "answer_id": 2082425, "score": 2, "text": "<p>$$\n{3-\\sqrt{5}x\\over (1+\\sqrt 5 \\, x)^3} = \\frac A {1+\\sqrt 5\\,x} + \\frac B {(1+\\sqrt 5\\,x)^2} + \\frac C {(1+\\sqrt 5\\,x)^3}\n$$\nFind $A$, $B$, and $C$ and then integrate each term separately.</p>\n" }, { "answer_id": 2082403, "score": 6, "text": "<p>Using integration by parts : $$\\begin{align}\\int_0^1 {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx &amp;= \\int_0^1 \\left( - \\frac{1}{2\\sqrt{5} (1+\\sqrt{5}x)^2} \\right)'(3-\\sqrt{5}x) \\, dx \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2}\\right]_0^1 - \\frac{1}{2} \\int_0^1 \\frac{dx}{(1+\\sqrt{5}x)^2} \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2} + \\frac{1}{10x + 2\\sqrt{5}}\\right]_0^1 \\\\[8pt] &amp;= \\frac{1}{2} \\end{align} $$</p>\n" }, { "answer_id": 2082442, "score": 6, "text": "<p>Set $\\frac{3-\\sqrt{5}x}{1+\\sqrt{5}x}=u$, we have $\\frac{-4\\sqrt{5}} \n{(1+\\sqrt{5}x)^2} dx=du$ thus\n$$\\int_{0}^{1} {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\,dx=\\frac{1}{-4\\sqrt{5}}\\int_{3}^{\\sqrt{5}-2}u\\,du=\\frac 12$$</p>\n" }, { "answer_id": 2082416, "score": 2, "text": "<p>Set $x=\\frac{\\sinh^2u}{\\sqrt{5}}$ and $dx=\\frac{2\\sinh u\\cosh u}{\\sqrt{5}}$</p>\n\n<p>So $$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx=\\int \\frac{6\\sinh u\\cosh u}{\\sqrt{5}\\cosh^6 u}du -\\int\\frac{2\\sinh^3u \\cosh u}{\\sqrt{5}\\cosh^6 u} \\, du = \\int \\frac{6\\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du -\\int \\frac{2(1+\\cosh^2 u ) \\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du$$</p>\n\n<p>And you can calculate these integrals using the substitution $t=\\cosh u$ and $dt=\\sinh u \\, du$</p>\n\n<p>Another way using partial fractions: </p>\n\n<p>$$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} dx= \\int \\frac{20\\sqrt{5}}{(5x+\\sqrt{5})^3}- \\frac{5}{(5x+\\sqrt{5})^2}dx $$</p>\n" }, { "answer_id": 2082425, "score": 2, "text": "<p>$$\n{3-\\sqrt{5}x\\over (1+\\sqrt 5 \\, x)^3} = \\frac 
A {1+\\sqrt 5\\,x} + \\frac B {(1+\\sqrt 5\\,x)^2} + \\frac C {(1+\\sqrt 5\\,x)^3}\n$$\nFind $A$, $B$, and $C$ and then integrate each term separately.</p>\n" }, { "answer_id": 2082403, "score": 6, "text": "<p>Using integration by parts : $$\\begin{align}\\int_0^1 {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx &amp;= \\int_0^1 \\left( - \\frac{1}{2\\sqrt{5} (1+\\sqrt{5}x)^2} \\right)'(3-\\sqrt{5}x) \\, dx \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2}\\right]_0^1 - \\frac{1}{2} \\int_0^1 \\frac{dx}{(1+\\sqrt{5}x)^2} \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2} + \\frac{1}{10x + 2\\sqrt{5}}\\right]_0^1 \\\\[8pt] &amp;= \\frac{1}{2} \\end{align} $$</p>\n" }, { "answer_id": 2082442, "score": 6, "text": "<p>Set $\\frac{3-\\sqrt{5}x}{1+\\sqrt{5}x}=u$, we have $\\frac{-4\\sqrt{5}} \n{(1+\\sqrt{5}x)^2} dx=du$ thus\n$$\\int_{0}^{1} {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\,dx=\\frac{1}{-4\\sqrt{5}}\\int_{3}^{\\sqrt{5}-2}u\\,du=\\frac 12$$</p>\n" }, { "answer_id": 2082416, "score": 2, "text": "<p>Set $x=\\frac{\\sinh^2u}{\\sqrt{5}}$ and $dx=\\frac{2\\sinh u\\cosh u}{\\sqrt{5}}$</p>\n\n<p>So $$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx=\\int \\frac{6\\sinh u\\cosh u}{\\sqrt{5}\\cosh^6 u}du -\\int\\frac{2\\sinh^3u \\cosh u}{\\sqrt{5}\\cosh^6 u} \\, du = \\int \\frac{6\\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du -\\int \\frac{2(1+\\cosh^2 u ) \\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du$$</p>\n\n<p>And you can calculate these integrals using the substitution $t=\\cosh u$ and $dt=\\sinh u \\, du$</p>\n\n<p>Another way using partial fractions: </p>\n\n<p>$$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} dx= \\int \\frac{20\\sqrt{5}}{(5x+\\sqrt{5})^3}- \\frac{5}{(5x+\\sqrt{5})^2}dx $$</p>\n" }, { "answer_id": 2082425, "score": 2, "text": "<p>$$\n{3-\\sqrt{5}x\\over (1+\\sqrt 5 \\, x)^3} = \\frac A {1+\\sqrt 5\\,x} + \\frac B {(1+\\sqrt 5\\,x)^2} + \\frac C {(1+\\sqrt 5\\,x)^3}\n$$\nFind $A$, $B$, and $C$ and then integrate each term separately.</p>\n" }, { "answer_id": 2082403, "score": 6, "text": "<p>Using integration by parts : $$\\begin{align}\\int_0^1 {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx &amp;= \\int_0^1 \\left( - \\frac{1}{2\\sqrt{5} (1+\\sqrt{5}x)^2} \\right)'(3-\\sqrt{5}x) \\, dx \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2}\\right]_0^1 - \\frac{1}{2} \\int_0^1 \\frac{dx}{(1+\\sqrt{5}x)^2} \\\\[8pt] &amp;= \\left[ \\frac{\\sqrt{5}x - 3}{2\\sqrt{5}(1+\\sqrt{5}x)^2} + \\frac{1}{10x + 2\\sqrt{5}}\\right]_0^1 \\\\[8pt] &amp;= \\frac{1}{2} \\end{align} $$</p>\n" }, { "answer_id": 2082442, "score": 6, "text": "<p>Set $\\frac{3-\\sqrt{5}x}{1+\\sqrt{5}x}=u$, we have $\\frac{-4\\sqrt{5}} \n{(1+\\sqrt{5}x)^2} dx=du$ thus\n$$\\int_{0}^{1} {3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\,dx=\\frac{1}{-4\\sqrt{5}}\\int_{3}^{\\sqrt{5}-2}u\\,du=\\frac 12$$</p>\n" }, { "answer_id": 2082416, "score": 2, "text": "<p>Set $x=\\frac{\\sinh^2u}{\\sqrt{5}}$ and $dx=\\frac{2\\sinh u\\cosh u}{\\sqrt{5}}$</p>\n\n<p>So $$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} \\, dx=\\int \\frac{6\\sinh u\\cosh u}{\\sqrt{5}\\cosh^6 u}du -\\int\\frac{2\\sinh^3u \\cosh u}{\\sqrt{5}\\cosh^6 u} \\, du = \\int \\frac{6\\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du -\\int \\frac{2(1+\\cosh^2 u ) \\sinh u}{\\sqrt{5}\\cosh^5 u} \\, du$$</p>\n\n<p>And you can calculate these integrals using the substitution $t=\\cosh u$ and $dt=\\sinh u \\, du$</p>\n\n<p>Another way using partial fractions: </p>\n\n<p>$$\\int{3-\\sqrt{5}x\\over (1+\\sqrt{5}x)^3} dx= \\int 
\\frac{20\\sqrt{5}}{(5x+\\sqrt{5})^3}- \\frac{5}{(5x+\\sqrt{5})^2}dx $$</p>\n" }, { "answer_id": 2082425, "score": 2, "text": "<p>$$\n{3-\\sqrt{5}x\\over (1+\\sqrt 5 \\, x)^3} = \\frac A {1+\\sqrt 5\\,x} + \\frac B {(1+\\sqrt 5\\,x)^2} + \\frac C {(1+\\sqrt 5\\,x)^3}\n$$\nFind $A$, $B$, and $C$ and then integrate each term separately.</p>\n" } ]
Question: Prove $\int_0^1 {3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \, dx={1\over 2}$ using an alternative method
Answer 1: <p>Using integration by parts : $$\begin{align}\int_0^1 {3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \, dx &amp;= \int_0^1 \left( - \frac{1}{2\sqrt{5} (1+\sqrt{5}x)^2} \right)'(3-\sqrt{5}x) \, dx \\[8pt] &amp;= \left[ \frac{\sqrt{5}x - 3}{2\sqrt{5}(1+\sqrt{5}x)^2}\right]_0^1 - \frac{1}{2} \int_0^1 \frac{dx}{(1+\sqrt{5}x)^2} \\[8pt] &amp;= \left[ \frac{\sqrt{5}x - 3}{2\sqrt{5}(1+\sqrt{5}x)^2} + \frac{1}{10x + 2\sqrt{5}}\right]_0^1 \\[8pt] &amp;= \frac{1}{2} \end{align} $$</p>
Answer 2: <p>Setting $\frac{3-\sqrt{5}x}{1+\sqrt{5}x}=u$, we have $\frac{-4\sqrt{5}}{(1+\sqrt{5}x)^2}\, dx=du$; thus $$\int_{0}^{1} {3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \,dx=\frac{1}{-4\sqrt{5}}\int_{3}^{\sqrt{5}-2}u\,du=\frac 12$$</p>
Answer 3: <p>Set $x=\frac{\sinh^2u}{\sqrt{5}}$ and $dx=\frac{2\sinh u\cosh u}{\sqrt{5}}\,du$.</p> <p>So $$\int{3-\sqrt{5}x\over (1+\sqrt{5}x)^3} \, dx=\int \frac{6\sinh u\cosh u}{\sqrt{5}\cosh^6 u}\,du -\int\frac{2\sinh^3u \cosh u}{\sqrt{5}\cosh^6 u} \, du = \int \frac{6\sinh u}{\sqrt{5}\cosh^5 u} \, du -\int \frac{2(\cosh^2 u - 1) \sinh u}{\sqrt{5}\cosh^5 u} \, du$$</p> <p>And you can calculate these integrals using the substitution $t=\cosh u$ and $dt=\sinh u \, du$.</p> <p>Another way, using partial fractions: </p> <p>$$\int{3-\sqrt{5}x\over (1+\sqrt{5}x)^3} dx= \int \frac{20\sqrt{5}}{(5x+\sqrt{5})^3}- \frac{5}{(5x+\sqrt{5})^2}\,dx $$</p>
Answer 4: <p>$$ {3-\sqrt{5}x\over (1+\sqrt 5 \, x)^3} = \frac A {1+\sqrt 5\,x} + \frac B {(1+\sqrt 5\,x)^2} + \frac C {(1+\sqrt 5\,x)^3} $$ Find $A$, $B$, and $C$ and then integrate each term separately.</p>
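The answers above can be checked mechanically. Below is a minimal verification sketch added for illustration (not from the original thread); it assumes sympy is available, and confirms both the value of the integral and the $u$-substitution endpoint $\sqrt{5}-2$.

```python
# Verify the definite integral from the question and the u-substitution
# endpoint. Minimal sketch, assuming sympy is installed; not from the thread.
import sympy as sp

x = sp.symbols('x')
integrand = (3 - sp.sqrt(5) * x) / (1 + sp.sqrt(5) * x) ** 3

# Direct symbolic evaluation of the definite integral; expect 1/2.
print(sp.simplify(sp.integrate(integrand, (x, 0, 1))))  # 1/2

# Endpoint of u = (3 - sqrt(5) x)/(1 + sqrt(5) x) at x = 1; expect sqrt(5) - 2.
print(sp.radsimp((3 - sp.sqrt(5)) / (1 + sp.sqrt(5))))  # -2 + sqrt(5)
```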
Can you solve this Diophantine equation really fast: $31x + 30y + 29z = 366$
2,082,417
4
https://math.stackexchange.com/questions/2082417/can-you-solve-this-diophantine-equation-really-fast-31x-30y-29z-366
<p>Is there a "trick" to finding a solution $(x,y,z)$ of the Diophantine equation $$31x + 30y + 29z = 366$$ where $0 \leq z \leq y \leq x $?</p>
[ { "answer_id": 2082427, "score": 16, "text": "<p>Hint: How many months have $31$ days? $30$ days? $29$ days?</p>\n" }, { "answer_id": 2082426, "score": 3, "text": "<p>$$31*6=186$$ </p>\n\n<p>Then see $60=31+29$ and you can sum $30$'s and $60$'s to get the condition.</p>\n\n<p>$$366=31*7+30*4+29*1$$</p>\n" }, { "answer_id": 2082424, "score": 1, "text": "<p>Solving $(N+1)x + Ny = M$ is the same as dividing $M$ by $N$: $366 = 30*12 + 6 = 31*6 + 30*6$. $z$ equals $0$.</p>\n" }, { "answer_id": 2082427, "score": 16, "text": "<p>Hint: How many months have $31$ days? $30$ days? $29$ days?</p>\n" }, { "answer_id": 2082426, "score": 3, "text": "<p>$$31*6=186$$ </p>\n\n<p>Then see $60=31+29$ and you can sum $30$'s and $60$'s to get the condition.</p>\n\n<p>$$366=31*7+30*4+29*1$$</p>\n" }, { "answer_id": 2082424, "score": 1, "text": "<p>Solving $(N+1)x + Ny = M$ is the same as dividing $M$ by $N$: $366 = 30*12 + 6 = 31*6 + 30*6$. $z$ equals $0$.</p>\n" }, { "answer_id": 2082427, "score": 16, "text": "<p>Hint: How many months have $31$ days? $30$ days? $29$ days?</p>\n" }, { "answer_id": 2082426, "score": 3, "text": "<p>$$31*6=186$$ </p>\n\n<p>Then see $60=31+29$ and you can sum $30$'s and $60$'s to get the condition.</p>\n\n<p>$$366=31*7+30*4+29*1$$</p>\n" }, { "answer_id": 2082424, "score": 1, "text": "<p>Solving $(N+1)x + Ny = M$ is the same as dividing $M$ by $N$: $366 = 30*12 + 6 = 31*6 + 30*6$. $z$ equals $0$.</p>\n" }, { "answer_id": 2082427, "score": 16, "text": "<p>Hint: How many months have $31$ days? $30$ days? $29$ days?</p>\n" }, { "answer_id": 2082426, "score": 3, "text": "<p>$$31*6=186$$ </p>\n\n<p>Then see $60=31+29$ and you can sum $30$'s and $60$'s to get the condition.</p>\n\n<p>$$366=31*7+30*4+29*1$$</p>\n" }, { "answer_id": 2082424, "score": 1, "text": "<p>Solving $(N+1)x + Ny = M$ is the same as dividing $M$ by $N$: $366 = 30*12 + 6 = 31*6 + 30*6$. $z$ equals $0$.</p>\n" } ]
Question: Can you solve this Diophantine equation really fast: $31x + 30y + 29z = 366$
Answer 1: <p>Hint: How many months have $31$ days? $30$ days? $29$ days?</p>
Answer 2: <p>$$31*6=186$$ </p> <p>Then see $60=31+29$ and you can sum $30$'s and $60$'s to get the condition.</p> <p>$$366=31*7+30*4+29*1$$</p>
Answer 3: <p>Solving $(N+1)x + Ny = M$ is the same as dividing $M$ by $N$: $366 = 30*12 + 6 = 31*6 + 30*6$. $z$ equals $0$.</p>
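The months hint suggests $(x,y,z)=(7,4,1)$, the month counts of a leap year. A tiny brute-force enumeration (a sketch added for illustration, not from the original thread; plain Python, no dependencies) shows it is one of exactly three solutions satisfying $0 \le z \le y \le x$:

```python
# Enumerate all solutions of 31x + 30y + 29z = 366 with 0 <= z <= y <= x.
# Plain Python; not from the original thread.
solutions = [
    (x, y, z)
    for x in range(366 // 31 + 1)   # x can be at most 11
    for y in range(x + 1)
    for z in range(y + 1)
    if 31 * x + 30 * y + 29 * z == 366
]
print(solutions)  # [(6, 6, 0), (7, 4, 1), (8, 2, 2)]
```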
Why does it make sense to describe complex numbers through powers of $e$?
2,082,455
4
https://math.stackexchange.com/questions/2082455/why-does-it-make-sense-to-describe-complex-numbers-through-powers-of-e
<p>I'm studying complex numbers in depth, but I keep wondering why it makes sense to describe them through powers of $e$.</p>
[ { "answer_id": 2082491, "score": 4, "text": "<p><em>Because of the convenience of logarithms and antilogarithms.</em></p>\n\n<p>Logarithms were invented to turn multiplications and divisions into additions and subtractions, and to implement arbitrary powers.</p>\n\n<p>This feature carries to the complex numbers, and you can write</p>\n\n<p>$$e^{\\log w+\\log z}=wz.$$</p>\n\n<p>This creates a close connection to the polar form of the complex numbers, i.e. as vectors characterized by a magnitude ($|z|=\\sqrt{x^2+y^2}$) and an argument ($\\angle z=\\arctan y/x$) instead of Cartesian coordinates.</p>\n\n<p>Indeed, the product of two complex number is a complex number having for magnitude the product of the magnitudes, and for argument the sum of the arguments. Then if you form a complex number with the real part equal to the logarithm for the magnitude and the imaginary part equal to the argument, you define a complex logarithm:</p>\n\n<p>$$\\log wz=\\log|wz|+i\\angle wz=\\log|w|+\\log|z|+i\\angle w+i\\angle z=\\log w+\\log z.$$</p>\n\n<p>Conversely, the antilogarithm (i.e. the exponential) is the number such that the logarithm of its module is the real part of the given number, and its argument is the imaginary part:</p>\n\n<p>$$\\log\\rho=x,\\\\\\theta=y$$ which you can invert with</p>\n\n<p>$$e^{x+iy}=\\rho(\\cos\\theta+i\\sin\\theta).$$</p>\n\n<p>There is much more to be said on the topic. For now, remember that the exponential/logarithm create interesting connections between Cartesian/polar coordinates and a symbiosis with the trigonometric functions, in such a way that many computations are simplified.</p>\n" }, { "answer_id": 2082461, "score": 2, "text": "<p>Well when you are working with complex numbers in the polar form, expressing them in exponential forms allows easier multiplication of such complex numbers. Thats one reason.</p>\n" }, { "answer_id": 2082459, "score": 1, "text": "<p>An exponentiated complex number $a+ib;a,b \\in \\mathbb{R}$ has the form $e^{a+ib} = e^a e^{ib}$.</p>\n\n<p>With Taylor series expansion of $e^{ib}$ it holds</p>\n\n<p>$e^{ib} = 1+ib-\\frac{1}{2!}b^2-\\frac{i}{3!}b^3+\\frac{1}{4!}b^4+\\dots$.</p>\n\n<p>If you order it by real and imaginary part you will obtain elementary trigonometric functions. Therefore exponentials of complex numbers are a very good and elegant tool to handle trigonometric entities.</p>\n" }, { "answer_id": 2082471, "score": 1, "text": "<p>Suppose that we have the following function:</p>\n\n<p>$$f(x)=\\frac{\\cos(x)+i\\sin(x)}{e^{ix}}$$</p>\n\n<p>Differentiating once, assuming $e^{ix}$ can be reasonably defined for differentiation,</p>\n\n<p>$$f'(x)=0$$</p>\n\n<p>Thus, it follows that $f(x)$ is constant. 
Since it is trivial that $f(0)=1$, then $f(x)=1$ for all $x$, or</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<hr>\n\n<p>One may also alternatively note that $y=e^x$ is analytic, and thus, by Taylor's theorem,</p>\n\n<p>$$e^x=\\sum_{k=0}^\\infty\\frac{x^k}{k!}$$</p>\n\n<p>Similarly, sine and cosine are analytic, with the following Taylor expansions:</p>\n\n<p>$$\\cos(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k}}{(2k)!};\\quad\\sin(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$</p>\n\n<p>Simply by adding, Euler noticed that</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n" }, { "answer_id": 2082491, "score": 4, "text": "<p><em>Because of the convenience of logarithms and antilogarithms.</em></p>\n\n<p>Logarithms were invented to turn multiplications and divisions into additions and subtractions, and to implement arbitrary powers.</p>\n\n<p>This feature carries to the complex numbers, and you can write</p>\n\n<p>$$e^{\\log w+\\log z}=wz.$$</p>\n\n<p>This creates a close connection to the polar form of the complex numbers, i.e. as vectors characterized by a magnitude ($|z|=\\sqrt{x^2+y^2}$) and an argument ($\\angle z=\\arctan y/x$) instead of Cartesian coordinates.</p>\n\n<p>Indeed, the product of two complex number is a complex number having for magnitude the product of the magnitudes, and for argument the sum of the arguments. Then if you form a complex number with the real part equal to the logarithm for the magnitude and the imaginary part equal to the argument, you define a complex logarithm:</p>\n\n<p>$$\\log wz=\\log|wz|+i\\angle wz=\\log|w|+\\log|z|+i\\angle w+i\\angle z=\\log w+\\log z.$$</p>\n\n<p>Conversely, the antilogarithm (i.e. the exponential) is the number such that the logarithm of its module is the real part of the given number, and its argument is the imaginary part:</p>\n\n<p>$$\\log\\rho=x,\\\\\\theta=y$$ which you can invert with</p>\n\n<p>$$e^{x+iy}=\\rho(\\cos\\theta+i\\sin\\theta).$$</p>\n\n<p>There is much more to be said on the topic. For now, remember that the exponential/logarithm create interesting connections between Cartesian/polar coordinates and a symbiosis with the trigonometric functions, in such a way that many computations are simplified.</p>\n" }, { "answer_id": 2082461, "score": 2, "text": "<p>Well when you are working with complex numbers in the polar form, expressing them in exponential forms allows easier multiplication of such complex numbers. Thats one reason.</p>\n" }, { "answer_id": 2082459, "score": 1, "text": "<p>An exponentiated complex number $a+ib;a,b \\in \\mathbb{R}$ has the form $e^{a+ib} = e^a e^{ib}$.</p>\n\n<p>With Taylor series expansion of $e^{ib}$ it holds</p>\n\n<p>$e^{ib} = 1+ib-\\frac{1}{2!}b^2-\\frac{i}{3!}b^3+\\frac{1}{4!}b^4+\\dots$.</p>\n\n<p>If you order it by real and imaginary part you will obtain elementary trigonometric functions. Therefore exponentials of complex numbers are a very good and elegant tool to handle trigonometric entities.</p>\n" }, { "answer_id": 2082471, "score": 1, "text": "<p>Suppose that we have the following function:</p>\n\n<p>$$f(x)=\\frac{\\cos(x)+i\\sin(x)}{e^{ix}}$$</p>\n\n<p>Differentiating once, assuming $e^{ix}$ can be reasonably defined for differentiation,</p>\n\n<p>$$f'(x)=0$$</p>\n\n<p>Thus, it follows that $f(x)$ is constant. 
Since it is trivial that $f(0)=1$, then $f(x)=1$ for all $x$, or</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<hr>\n\n<p>One may also alternatively note that $y=e^x$ is analytic, and thus, by Taylor's theorem,</p>\n\n<p>$$e^x=\\sum_{k=0}^\\infty\\frac{x^k}{k!}$$</p>\n\n<p>Similarly, sine and cosine are analytic, with the following Taylor expansions:</p>\n\n<p>$$\\cos(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k}}{(2k)!};\\quad\\sin(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$</p>\n\n<p>Simply by adding, Euler noticed that</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n" }, { "answer_id": 2082491, "score": 4, "text": "<p><em>Because of the convenience of logarithms and antilogarithms.</em></p>\n\n<p>Logarithms were invented to turn multiplications and divisions into additions and subtractions, and to implement arbitrary powers.</p>\n\n<p>This feature carries to the complex numbers, and you can write</p>\n\n<p>$$e^{\\log w+\\log z}=wz.$$</p>\n\n<p>This creates a close connection to the polar form of the complex numbers, i.e. as vectors characterized by a magnitude ($|z|=\\sqrt{x^2+y^2}$) and an argument ($\\angle z=\\arctan y/x$) instead of Cartesian coordinates.</p>\n\n<p>Indeed, the product of two complex number is a complex number having for magnitude the product of the magnitudes, and for argument the sum of the arguments. Then if you form a complex number with the real part equal to the logarithm for the magnitude and the imaginary part equal to the argument, you define a complex logarithm:</p>\n\n<p>$$\\log wz=\\log|wz|+i\\angle wz=\\log|w|+\\log|z|+i\\angle w+i\\angle z=\\log w+\\log z.$$</p>\n\n<p>Conversely, the antilogarithm (i.e. the exponential) is the number such that the logarithm of its module is the real part of the given number, and its argument is the imaginary part:</p>\n\n<p>$$\\log\\rho=x,\\\\\\theta=y$$ which you can invert with</p>\n\n<p>$$e^{x+iy}=\\rho(\\cos\\theta+i\\sin\\theta).$$</p>\n\n<p>There is much more to be said on the topic. For now, remember that the exponential/logarithm create interesting connections between Cartesian/polar coordinates and a symbiosis with the trigonometric functions, in such a way that many computations are simplified.</p>\n" }, { "answer_id": 2082461, "score": 2, "text": "<p>Well when you are working with complex numbers in the polar form, expressing them in exponential forms allows easier multiplication of such complex numbers. Thats one reason.</p>\n" }, { "answer_id": 2082459, "score": 1, "text": "<p>An exponentiated complex number $a+ib;a,b \\in \\mathbb{R}$ has the form $e^{a+ib} = e^a e^{ib}$.</p>\n\n<p>With Taylor series expansion of $e^{ib}$ it holds</p>\n\n<p>$e^{ib} = 1+ib-\\frac{1}{2!}b^2-\\frac{i}{3!}b^3+\\frac{1}{4!}b^4+\\dots$.</p>\n\n<p>If you order it by real and imaginary part you will obtain elementary trigonometric functions. Therefore exponentials of complex numbers are a very good and elegant tool to handle trigonometric entities.</p>\n" }, { "answer_id": 2082471, "score": 1, "text": "<p>Suppose that we have the following function:</p>\n\n<p>$$f(x)=\\frac{\\cos(x)+i\\sin(x)}{e^{ix}}$$</p>\n\n<p>Differentiating once, assuming $e^{ix}$ can be reasonably defined for differentiation,</p>\n\n<p>$$f'(x)=0$$</p>\n\n<p>Thus, it follows that $f(x)$ is constant. 
Since it is trivial that $f(0)=1$, then $f(x)=1$ for all $x$, or</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<hr>\n\n<p>One may also alternatively note that $y=e^x$ is analytic, and thus, by Taylor's theorem,</p>\n\n<p>$$e^x=\\sum_{k=0}^\\infty\\frac{x^k}{k!}$$</p>\n\n<p>Similarly, sine and cosine are analytic, with the following Taylor expansions:</p>\n\n<p>$$\\cos(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k}}{(2k)!};\\quad\\sin(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$</p>\n\n<p>Simply by adding, Euler noticed that</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n" }, { "answer_id": 2082491, "score": 4, "text": "<p><em>Because of the convenience of logarithms and antilogarithms.</em></p>\n\n<p>Logarithms were invented to turn multiplications and divisions into additions and subtractions, and to implement arbitrary powers.</p>\n\n<p>This feature carries to the complex numbers, and you can write</p>\n\n<p>$$e^{\\log w+\\log z}=wz.$$</p>\n\n<p>This creates a close connection to the polar form of the complex numbers, i.e. as vectors characterized by a magnitude ($|z|=\\sqrt{x^2+y^2}$) and an argument ($\\angle z=\\arctan y/x$) instead of Cartesian coordinates.</p>\n\n<p>Indeed, the product of two complex number is a complex number having for magnitude the product of the magnitudes, and for argument the sum of the arguments. Then if you form a complex number with the real part equal to the logarithm for the magnitude and the imaginary part equal to the argument, you define a complex logarithm:</p>\n\n<p>$$\\log wz=\\log|wz|+i\\angle wz=\\log|w|+\\log|z|+i\\angle w+i\\angle z=\\log w+\\log z.$$</p>\n\n<p>Conversely, the antilogarithm (i.e. the exponential) is the number such that the logarithm of its module is the real part of the given number, and its argument is the imaginary part:</p>\n\n<p>$$\\log\\rho=x,\\\\\\theta=y$$ which you can invert with</p>\n\n<p>$$e^{x+iy}=\\rho(\\cos\\theta+i\\sin\\theta).$$</p>\n\n<p>There is much more to be said on the topic. For now, remember that the exponential/logarithm create interesting connections between Cartesian/polar coordinates and a symbiosis with the trigonometric functions, in such a way that many computations are simplified.</p>\n" }, { "answer_id": 2082461, "score": 2, "text": "<p>Well when you are working with complex numbers in the polar form, expressing them in exponential forms allows easier multiplication of such complex numbers. Thats one reason.</p>\n" }, { "answer_id": 2082459, "score": 1, "text": "<p>An exponentiated complex number $a+ib;a,b \\in \\mathbb{R}$ has the form $e^{a+ib} = e^a e^{ib}$.</p>\n\n<p>With Taylor series expansion of $e^{ib}$ it holds</p>\n\n<p>$e^{ib} = 1+ib-\\frac{1}{2!}b^2-\\frac{i}{3!}b^3+\\frac{1}{4!}b^4+\\dots$.</p>\n\n<p>If you order it by real and imaginary part you will obtain elementary trigonometric functions. Therefore exponentials of complex numbers are a very good and elegant tool to handle trigonometric entities.</p>\n" }, { "answer_id": 2082471, "score": 1, "text": "<p>Suppose that we have the following function:</p>\n\n<p>$$f(x)=\\frac{\\cos(x)+i\\sin(x)}{e^{ix}}$$</p>\n\n<p>Differentiating once, assuming $e^{ix}$ can be reasonably defined for differentiation,</p>\n\n<p>$$f'(x)=0$$</p>\n\n<p>Thus, it follows that $f(x)$ is constant. 
Since it is trivial that $f(0)=1$, then $f(x)=1$ for all $x$, or</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n\n<hr>\n\n<p>One may also alternatively note that $y=e^x$ is analytic, and thus, by Taylor's theorem,</p>\n\n<p>$$e^x=\\sum_{k=0}^\\infty\\frac{x^k}{k!}$$</p>\n\n<p>Similarly, sine and cosine are analytic, with the following Taylor expansions:</p>\n\n<p>$$\\cos(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k}}{(2k)!};\\quad\\sin(x)=\\sum_{k=0}^\\infty\\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$</p>\n\n<p>Simply by adding, Euler noticed that</p>\n\n<p>$$e^{ix}=\\cos(x)+i\\sin(x)$$</p>\n" } ]
Question: Why does it make sense to describe complex numbers through powers of $e$?
Answer 1: <p><em>Because of the convenience of logarithms and antilogarithms.</em></p> <p>Logarithms were invented to turn multiplications and divisions into additions and subtractions, and to implement arbitrary powers.</p> <p>This feature carries over to the complex numbers, and you can write</p> <p>$$e^{\log w+\log z}=wz.$$</p> <p>This creates a close connection to the polar form of the complex numbers, i.e. as vectors characterized by a magnitude ($|z|=\sqrt{x^2+y^2}$) and an argument ($\angle z=\arctan y/x$) instead of Cartesian coordinates.</p> <p>Indeed, the product of two complex numbers is a complex number having for magnitude the product of the magnitudes, and for argument the sum of the arguments. Then if you form a complex number with the real part equal to the logarithm of the magnitude and the imaginary part equal to the argument, you define a complex logarithm:</p> <p>$$\log wz=\log|wz|+i\angle wz=\log|w|+\log|z|+i\angle w+i\angle z=\log w+\log z.$$</p> <p>Conversely, the antilogarithm (i.e. the exponential) is the number such that the logarithm of its modulus is the real part of the given number, and its argument is the imaginary part:</p> <p>$$\log\rho=x,\\\theta=y$$ which you can invert with</p> <p>$$e^{x+iy}=\rho(\cos\theta+i\sin\theta).$$</p> <p>There is much more to be said on the topic. For now, remember that the exponential/logarithm create interesting connections between Cartesian/polar coordinates and a symbiosis with the trigonometric functions, in such a way that many computations are simplified.</p>
Answer 2: <p>Well, when you are working with complex numbers in the polar form, expressing them in exponential form allows easier multiplication of such complex numbers. That's one reason.</p>
Answer 3: <p>The exponential of a complex number $a+ib$, with $a,b \in \mathbb{R}$, has the form $e^{a+ib} = e^a e^{ib}$.</p> <p>By the Taylor series expansion of $e^{ib}$,</p> <p>$e^{ib} = 1+ib-\frac{1}{2!}b^2-\frac{i}{3!}b^3+\frac{1}{4!}b^4+\dots$.</p> <p>If you order it by real and imaginary parts you will obtain the elementary trigonometric functions. Therefore exponentials of complex numbers are a very good and elegant tool to handle trigonometric entities.</p>
Answer 4: <p>Suppose that we have the following function:</p> <p>$$f(x)=\frac{\cos(x)+i\sin(x)}{e^{ix}}$$</p> <p>Differentiating once, assuming $e^{ix}$ can be reasonably defined for differentiation,</p> <p>$$f'(x)=0$$</p> <p>Thus, it follows that $f(x)$ is constant. Since it is trivial that $f(0)=1$, then $f(x)=1$ for all $x$, or</p> <p>$$e^{ix}=\cos(x)+i\sin(x)$$</p> <hr> <p>One may alternatively note that $y=e^x$ is analytic, and thus, by Taylor's theorem,</p> <p>$$e^x=\sum_{k=0}^\infty\frac{x^k}{k!}$$</p> <p>Similarly, sine and cosine are analytic, with the following Taylor expansions:</p> <p>$$\cos(x)=\sum_{k=0}^\infty\frac{(-1)^kx^{2k}}{(2k)!};\quad\sin(x)=\sum_{k=0}^\infty\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$</p> <p>Simply by adding, Euler noticed that</p> <p>$$e^{ix}=\cos(x)+i\sin(x)$$</p>
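A quick numerical illustration of the identity the answers derive, and of the polar product rule that motivates it (a sketch added for illustration, not from the original thread; Python standard library only):

```python
# Check Euler's formula e^{ix} = cos x + i sin x, and the rule that
# multiplying complex numbers multiplies moduli and adds arguments.
# Not from the original thread; standard library only.
import cmath
import math

for t in (0.0, 1.0, math.pi / 3, 2.5):
    assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12

# Polar product rule: moduli 2 and 3, arguments 0.7 and 1.1.
w, z = cmath.rect(2.0, 0.7), cmath.rect(3.0, 1.1)
assert abs(abs(w * z) - 6.0) < 1e-12
assert abs(cmath.phase(w * z) - 1.8) < 1e-12
print("Euler's formula and the polar product rule hold numerically.")
```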
Subfields of the splitting field of $x^4 - 2$ over $\mathbf Q$
2,082,676
4
https://math.stackexchange.com/questions/2082676/subfields-of-the-splitting-field-of-x4-2-over-mathbf-q
<p>I understand why the Galois group is isomorphic to $D_8$, say with \begin{align*} r: 2^{\frac{1}{4}} &amp;\mapsto 2^{\frac{1}{4}}i &amp;&amp;&amp; s: 2^{\frac{1}{4}} &amp;\mapsto 2^{\frac{1}{4}} \\ i &amp;\mapsto i &amp;&amp;&amp; i &amp;\mapsto -i\\ \end{align*}</p> <p>as generators. </p> <p>I don't understand how to find the fixed fields given a subgroup of the Galois group. For example, I believe the fixed field of $\langle r^3s\rangle$ is $\mathbf Q((1 + i)2^{\frac{1}{4}})$ and the fixed field of $\langle rs \rangle$ is $\mathbf Q((1 - i)2^{\frac{1}{4}})$. However, is there an easy way I could see this without just plugging in many random elements to see if they are fixed?</p>
[ { "answer_id": 2082727, "score": 2, "text": "<p>Let $L/\\mathbb{Q}$ be a finite Galois extension and $K \\subseteq L$ a subfield. Then the trace $$\\mathrm{Tr}_{L/K} = \\sum_{\\sigma \\in \\mathrm{Gal}(L/K)} \\sigma : L \\rightarrow K$$ is surjective. Using this you can get a basis of $K/\\mathbb{Q}$.</p>\n\n<p>In your case there is a basis $$L = \\mathrm{Span}\\Big( 1,\\sqrt[4]{2},\\sqrt{2},\\sqrt[4]{2}^3,i,\\sqrt[4]{2}i,\\sqrt{2}i,\\sqrt[4]{2}^3 i \\Big)$$ so the fixed field under $&lt;rs&gt; = \\{\\mathrm{id},rs\\}$ is the image of $$\\mathrm{Tr}_{L/K} = \\mathrm{id} + rs : \\begin{cases} 1 \\mapsto 2; \\\\ \\sqrt[4]{2} \\mapsto \\sqrt[4]{2} (1+i); \\\\ \\sqrt{2} \\mapsto 0; \\\\ \\sqrt[4]{2}^3 \\mapsto \\sqrt[4]{2}^3 (1-i); \\\\ i \\mapsto 0; \\\\ \\sqrt[4]{2} i \\mapsto \\sqrt[4]{2} (1 +i); \\\\ \\sqrt{2} i \\mapsto 2 \\sqrt{2} i; \\\\ \\sqrt[4]{2}^3 i \\mapsto \\sqrt[4]{2}^3 (1-i). \\end{cases}$$</p>\n\n<p>and therefore $K = \\mathbb{Q}(\\sqrt[4]{2} (1+i)).$</p>\n" }, { "answer_id": 2082752, "score": 1, "text": "<p>let be $x\\in K; K=\\mathbb Q(\\alpha,i); \\alpha^4=2, i^2=-1$</p>\n\n<p>$K$ splitting and separable field then $|Gal(K,\\mathbb Q)|=[K,\\mathbb Q]$</p>\n\n<p>Galois group is $\\{ id,\\sigma,\\sigma^2,\\sigma^3,\\tau, \\sigma\\tau,\\sigma^2\\tau,\\sigma^3\\tau\\} ; \\sigma: \\sqrt[4] {2}\\to i \\sqrt[4]{2}; \\tau: i\\to -i $</p>\n\n<p>let be $H=\\{id,\\sigma\\tau \\}$</p>\n\n<p>$id(x)=a_1+a_2\\alpha+a_3\\alpha^2+a_4\\alpha^3+a_5i+a_6i\\alpha+a_7i\\alpha^2+a_8i\\alpha^3$</p>\n\n<p>$\\sigma \\tau(x)=a_1+a_6\\alpha-a_3\\alpha^2-a_8\\alpha^3-a_5i+a_2i\\alpha+a_7i\\alpha^2-a_4i\\alpha^3$</p>\n\n<p>then $id(x)=\\sigma\\tau(x) \\Leftrightarrow a_2=a_6, a_3=-a_3, a_4=-a_8, a_5=-a_5$\nthen $x\\in K_H \\Leftrightarrow x=a_1+a_2(1+i)+a_4(1-i)\\alpha^3+a_7i\\alpha^2$ then $K_H=\\mathbb Q((1+i)\\alpha)$</p>\n" }, { "answer_id": 2082727, "score": 2, "text": "<p>Let $L/\\mathbb{Q}$ be a finite Galois extension and $K \\subseteq L$ a subfield. Then the trace $$\\mathrm{Tr}_{L/K} = \\sum_{\\sigma \\in \\mathrm{Gal}(L/K)} \\sigma : L \\rightarrow K$$ is surjective. Using this you can get a basis of $K/\\mathbb{Q}$.</p>\n\n<p>In your case there is a basis $$L = \\mathrm{Span}\\Big( 1,\\sqrt[4]{2},\\sqrt{2},\\sqrt[4]{2}^3,i,\\sqrt[4]{2}i,\\sqrt{2}i,\\sqrt[4]{2}^3 i \\Big)$$ so the fixed field under $&lt;rs&gt; = \\{\\mathrm{id},rs\\}$ is the image of $$\\mathrm{Tr}_{L/K} = \\mathrm{id} + rs : \\begin{cases} 1 \\mapsto 2; \\\\ \\sqrt[4]{2} \\mapsto \\sqrt[4]{2} (1+i); \\\\ \\sqrt{2} \\mapsto 0; \\\\ \\sqrt[4]{2}^3 \\mapsto \\sqrt[4]{2}^3 (1-i); \\\\ i \\mapsto 0; \\\\ \\sqrt[4]{2} i \\mapsto \\sqrt[4]{2} (1 +i); \\\\ \\sqrt{2} i \\mapsto 2 \\sqrt{2} i; \\\\ \\sqrt[4]{2}^3 i \\mapsto \\sqrt[4]{2}^3 (1-i). 
\\end{cases}$$</p>\n\n<p>and therefore $K = \\mathbb{Q}(\\sqrt[4]{2} (1+i)).$</p>\n" }, { "answer_id": 2082752, "score": 1, "text": "<p>let be $x\\in K; K=\\mathbb Q(\\alpha,i); \\alpha^4=2, i^2=-1$</p>\n\n<p>$K$ splitting and separable field then $|Gal(K,\\mathbb Q)|=[K,\\mathbb Q]$</p>\n\n<p>Galois group is $\\{ id,\\sigma,\\sigma^2,\\sigma^3,\\tau, \\sigma\\tau,\\sigma^2\\tau,\\sigma^3\\tau\\} ; \\sigma: \\sqrt[4] {2}\\to i \\sqrt[4]{2}; \\tau: i\\to -i $</p>\n\n<p>let be $H=\\{id,\\sigma\\tau \\}$</p>\n\n<p>$id(x)=a_1+a_2\\alpha+a_3\\alpha^2+a_4\\alpha^3+a_5i+a_6i\\alpha+a_7i\\alpha^2+a_8i\\alpha^3$</p>\n\n<p>$\\sigma \\tau(x)=a_1+a_6\\alpha-a_3\\alpha^2-a_8\\alpha^3-a_5i+a_2i\\alpha+a_7i\\alpha^2-a_4i\\alpha^3$</p>\n\n<p>then $id(x)=\\sigma\\tau(x) \\Leftrightarrow a_2=a_6, a_3=-a_3, a_4=-a_8, a_5=-a_5$\nthen $x\\in K_H \\Leftrightarrow x=a_1+a_2(1+i)+a_4(1-i)\\alpha^3+a_7i\\alpha^2$ then $K_H=\\mathbb Q((1+i)\\alpha)$</p>\n" }, { "answer_id": 2082727, "score": 2, "text": "<p>Let $L/\\mathbb{Q}$ be a finite Galois extension and $K \\subseteq L$ a subfield. Then the trace $$\\mathrm{Tr}_{L/K} = \\sum_{\\sigma \\in \\mathrm{Gal}(L/K)} \\sigma : L \\rightarrow K$$ is surjective. Using this you can get a basis of $K/\\mathbb{Q}$.</p>\n\n<p>In your case there is a basis $$L = \\mathrm{Span}\\Big( 1,\\sqrt[4]{2},\\sqrt{2},\\sqrt[4]{2}^3,i,\\sqrt[4]{2}i,\\sqrt{2}i,\\sqrt[4]{2}^3 i \\Big)$$ so the fixed field under $&lt;rs&gt; = \\{\\mathrm{id},rs\\}$ is the image of $$\\mathrm{Tr}_{L/K} = \\mathrm{id} + rs : \\begin{cases} 1 \\mapsto 2; \\\\ \\sqrt[4]{2} \\mapsto \\sqrt[4]{2} (1+i); \\\\ \\sqrt{2} \\mapsto 0; \\\\ \\sqrt[4]{2}^3 \\mapsto \\sqrt[4]{2}^3 (1-i); \\\\ i \\mapsto 0; \\\\ \\sqrt[4]{2} i \\mapsto \\sqrt[4]{2} (1 +i); \\\\ \\sqrt{2} i \\mapsto 2 \\sqrt{2} i; \\\\ \\sqrt[4]{2}^3 i \\mapsto \\sqrt[4]{2}^3 (1-i). \\end{cases}$$</p>\n\n<p>and therefore $K = \\mathbb{Q}(\\sqrt[4]{2} (1+i)).$</p>\n" }, { "answer_id": 2082752, "score": 1, "text": "<p>let be $x\\in K; K=\\mathbb Q(\\alpha,i); \\alpha^4=2, i^2=-1$</p>\n\n<p>$K$ splitting and separable field then $|Gal(K,\\mathbb Q)|=[K,\\mathbb Q]$</p>\n\n<p>Galois group is $\\{ id,\\sigma,\\sigma^2,\\sigma^3,\\tau, \\sigma\\tau,\\sigma^2\\tau,\\sigma^3\\tau\\} ; \\sigma: \\sqrt[4] {2}\\to i \\sqrt[4]{2}; \\tau: i\\to -i $</p>\n\n<p>let be $H=\\{id,\\sigma\\tau \\}$</p>\n\n<p>$id(x)=a_1+a_2\\alpha+a_3\\alpha^2+a_4\\alpha^3+a_5i+a_6i\\alpha+a_7i\\alpha^2+a_8i\\alpha^3$</p>\n\n<p>$\\sigma \\tau(x)=a_1+a_6\\alpha-a_3\\alpha^2-a_8\\alpha^3-a_5i+a_2i\\alpha+a_7i\\alpha^2-a_4i\\alpha^3$</p>\n\n<p>then $id(x)=\\sigma\\tau(x) \\Leftrightarrow a_2=a_6, a_3=-a_3, a_4=-a_8, a_5=-a_5$\nthen $x\\in K_H \\Leftrightarrow x=a_1+a_2(1+i)+a_4(1-i)\\alpha^3+a_7i\\alpha^2$ then $K_H=\\mathbb Q((1+i)\\alpha)$</p>\n" }, { "answer_id": 2082727, "score": 2, "text": "<p>Let $L/\\mathbb{Q}$ be a finite Galois extension and $K \\subseteq L$ a subfield. Then the trace $$\\mathrm{Tr}_{L/K} = \\sum_{\\sigma \\in \\mathrm{Gal}(L/K)} \\sigma : L \\rightarrow K$$ is surjective. 
Using this you can get a basis of $K/\\mathbb{Q}$.</p>\n\n<p>In your case there is a basis $$L = \\mathrm{Span}\\Big( 1,\\sqrt[4]{2},\\sqrt{2},\\sqrt[4]{2}^3,i,\\sqrt[4]{2}i,\\sqrt{2}i,\\sqrt[4]{2}^3 i \\Big)$$ so the fixed field under $&lt;rs&gt; = \\{\\mathrm{id},rs\\}$ is the image of $$\\mathrm{Tr}_{L/K} = \\mathrm{id} + rs : \\begin{cases} 1 \\mapsto 2; \\\\ \\sqrt[4]{2} \\mapsto \\sqrt[4]{2} (1+i); \\\\ \\sqrt{2} \\mapsto 0; \\\\ \\sqrt[4]{2}^3 \\mapsto \\sqrt[4]{2}^3 (1-i); \\\\ i \\mapsto 0; \\\\ \\sqrt[4]{2} i \\mapsto \\sqrt[4]{2} (1 +i); \\\\ \\sqrt{2} i \\mapsto 2 \\sqrt{2} i; \\\\ \\sqrt[4]{2}^3 i \\mapsto \\sqrt[4]{2}^3 (1-i). \\end{cases}$$</p>\n\n<p>and therefore $K = \\mathbb{Q}(\\sqrt[4]{2} (1+i)).$</p>\n" }, { "answer_id": 2082752, "score": 1, "text": "<p>let be $x\\in K; K=\\mathbb Q(\\alpha,i); \\alpha^4=2, i^2=-1$</p>\n\n<p>$K$ splitting and separable field then $|Gal(K,\\mathbb Q)|=[K,\\mathbb Q]$</p>\n\n<p>Galois group is $\\{ id,\\sigma,\\sigma^2,\\sigma^3,\\tau, \\sigma\\tau,\\sigma^2\\tau,\\sigma^3\\tau\\} ; \\sigma: \\sqrt[4] {2}\\to i \\sqrt[4]{2}; \\tau: i\\to -i $</p>\n\n<p>let be $H=\\{id,\\sigma\\tau \\}$</p>\n\n<p>$id(x)=a_1+a_2\\alpha+a_3\\alpha^2+a_4\\alpha^3+a_5i+a_6i\\alpha+a_7i\\alpha^2+a_8i\\alpha^3$</p>\n\n<p>$\\sigma \\tau(x)=a_1+a_6\\alpha-a_3\\alpha^2-a_8\\alpha^3-a_5i+a_2i\\alpha+a_7i\\alpha^2-a_4i\\alpha^3$</p>\n\n<p>then $id(x)=\\sigma\\tau(x) \\Leftrightarrow a_2=a_6, a_3=-a_3, a_4=-a_8, a_5=-a_5$\nthen $x\\in K_H \\Leftrightarrow x=a_1+a_2(1+i)+a_4(1-i)\\alpha^3+a_7i\\alpha^2$ then $K_H=\\mathbb Q((1+i)\\alpha)$</p>\n" } ]
Question: Subfields of the splitting field of $x^4 - 2$ over $\mathbf Q$
Answer 1: <p>Let $L/\mathbb{Q}$ be a finite Galois extension and $K \subseteq L$ a subfield. Then the trace $$\mathrm{Tr}_{L/K} = \sum_{\sigma \in \mathrm{Gal}(L/K)} \sigma : L \rightarrow K$$ is surjective. Using this you can get a basis of $K/\mathbb{Q}$.</p> <p>In your case there is a basis $$L = \mathrm{Span}\Big( 1,\sqrt[4]{2},\sqrt{2},\sqrt[4]{2}^3,i,\sqrt[4]{2}i,\sqrt{2}i,\sqrt[4]{2}^3 i \Big)$$ so the fixed field under $\langle rs\rangle = \{\mathrm{id},rs\}$ is the image of $$\mathrm{Tr}_{L/K} = \mathrm{id} + rs : \begin{cases} 1 \mapsto 2; \\ \sqrt[4]{2} \mapsto \sqrt[4]{2} (1+i); \\ \sqrt{2} \mapsto 0; \\ \sqrt[4]{2}^3 \mapsto \sqrt[4]{2}^3 (1-i); \\ i \mapsto 0; \\ \sqrt[4]{2} i \mapsto \sqrt[4]{2} (1 +i); \\ \sqrt{2} i \mapsto 2 \sqrt{2} i; \\ \sqrt[4]{2}^3 i \mapsto -\sqrt[4]{2}^3 (1-i). \end{cases}$$</p> <p>and therefore $K = \mathbb{Q}(\sqrt[4]{2} (1+i)).$</p>
Answer 2: <p>Let $x\in K$, where $K=\mathbb Q(\alpha,i)$, $\alpha^4=2$, $i^2=-1$.</p> <p>$K$ is a splitting field of a separable polynomial, so $|Gal(K/\mathbb Q)|=[K:\mathbb Q]$.</p> <p>The Galois group is $\{ id,\sigma,\sigma^2,\sigma^3,\tau, \sigma\tau,\sigma^2\tau,\sigma^3\tau\}$, where $\sigma: \sqrt[4] {2}\to i \sqrt[4]{2}$ and $\tau: i\to -i$.</p> <p>Let $H=\{id,\sigma\tau \}$ and write</p> <p>$id(x)=a_1+a_2\alpha+a_3\alpha^2+a_4\alpha^3+a_5i+a_6i\alpha+a_7i\alpha^2+a_8i\alpha^3$</p> <p>$\sigma \tau(x)=a_1+a_6\alpha-a_3\alpha^2-a_8\alpha^3-a_5i+a_2i\alpha+a_7i\alpha^2-a_4i\alpha^3$</p> <p>Then $id(x)=\sigma\tau(x) \Leftrightarrow a_2=a_6,\ a_3=-a_3,\ a_4=-a_8,\ a_5=-a_5$, so $x\in K_H \Leftrightarrow x=a_1+a_2(1+i)\alpha+a_4(1-i)\alpha^3+a_7i\alpha^2$; hence $K_H=\mathbb Q((1+i)\alpha)$.</p>
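A concrete cross-check of the fixed field found in both answers (a sketch added for illustration, not from the original thread; assumes sympy is available): since $((1+i)\,2^{1/4})^4 = (1+i)^4 \cdot 2 = (2i)^2 \cdot 2 = -8$, the generator is a root of the irreducible quartic $x^4+8$, so $\mathbf Q((1+i)2^{1/4})$ has degree $4$ over $\mathbf Q$, exactly what one expects for the fixed field of an order-$2$ subgroup of the degree-$8$ extension.

```python
# Check that z = (1 + i) * 2**(1/4) has minimal polynomial x**4 + 8 over Q,
# so [Q(z):Q] = 4, as expected for the fixed field of an order-2 subgroup.
# Minimal sketch, assuming sympy is installed; not from the original thread.
import sympy as sp

z = (1 + sp.I) * sp.root(2, 4)
print(sp.expand(z ** 4))           # -8
print(sp.minimal_polynomial(z))    # x**4 + 8
```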
Combinations problem involving a standard pack of $52$ playing cards and a $4$ sided die: Part 1
2,082,856
4
https://math.stackexchange.com/questions/2082856/combinations-problem-involving-a-standard-pack-of-52-playing-cards-and-a-4-s
<blockquote> <p>Consider a standard pack of $52$ playing cards. The cards are distributed into $4$ piles completely randomly by tossing a four-sided die for each card. How many possible arrangements are there for distributing the cards among the four piles?</p> </blockquote> <hr> <p>My attempt goes as follows:</p> <p>$$\frac{52!}{(52-13)!\cdot13!\cdot2!}\approx 3.18\times 10^{11}$$ My reasoning for calculating it this way is that we are choosing $13$ cards from a selection of $52$ cards, but there are $2$ different colors, so I divide by these repeats.</p> <p>The correct answer is $2.03\times 10^{31}$ or in exact form, it is </p> <p>$20282409603651670423947251286016$.</p> <p>I am $20$ orders of magnitude away from the correct answer, which shows that combinatorics is not my strong point. </p> <p>Could anyone please give me any hints or tips on how to reach the correct answer?</p> <hr> <h2>Edit:</h2> <p>Answers below indicate there is ambiguity in the word "arrangements" shown in the quote (the ordering of each pile may or may <em>not</em> matter). In that case I can only apologize for the ambiguity, but I am simply <em>quoting</em> the professor's question <strong>word for word</strong>.</p>
[ { "answer_id": 2082860, "score": 3, "text": "<p>For each of the 52 cards there are four possible choices for what pile it is in. So there's 4 choices for the first card, times 4 choices for the second card, times 4 for the third and so on.</p>\n" }, { "answer_id": 2082876, "score": 3, "text": "<p>Assuming arrangement within each pile matters (unlike how the other answerers assumed) imagine three additional cards which are used to indicate when to switch to the next pile.</p>\n\n<p>There are then $\\frac{55!}{3!}\\approx 2.1\\cdot 10^{72}$ such arrangements.</p>\n" }, { "answer_id": 2082878, "score": 3, "text": "<p>EDIT: (This all depends on what is meant by 'arrangements'; I think that in this case the correct answer is probably <strong>not</strong> what I give below (which assumes the deck is also randomized); but instead the correct answer is $4^{52}$ (we don;t care about the ordering in each pile)).</p>\n\n<p>Suppose we take the usual deck of 52 cards; and add three blank White cards to the deck.</p>\n\n<p>Shuffle that deck and lay the cards out on the table, face up.</p>\n\n<p>Take the cards to the left of the leftmost White card (possibly none), and place them in a pile. Take the cards between the leftmost White and the second leftmost White card and place them in a second pile. Take the cards between the second leftmost White card and the third leftmost White and place them in a third pile. Take the remaining cards (those to the right of all the White cards) and place them in a fourth pile.</p>\n\n<p>This is the number of possibilities as in your problem statement; so there are $\\frac{(52+3)!}{3!}$ possible arrangements.</p>\n" }, { "answer_id": 2082864, "score": 2, "text": "<p><strong>The answer is simply $4^{52} $!!</strong></p>\n\n<hr>\n\n<p>Take for suppose you have to divide the cards into four groups numbered $1$, $2$, $3$ and $4$ respectively. These same numbers are also on the die. Each card has four ways of going into any one of the groups so hence for $52$ cards, we have the required number of possibilities. Hope it helps. </p>\n" }, { "answer_id": 2082860, "score": 3, "text": "<p>For each of the 52 cards there are four possible choices for what pile it is in. So there's 4 choices for the first card, times 4 choices for the second card, times 4 for the third and so on.</p>\n" }, { "answer_id": 2082876, "score": 3, "text": "<p>Assuming arrangement within each pile matters (unlike how the other answerers assumed) imagine three additional cards which are used to indicate when to switch to the next pile.</p>\n\n<p>There are then $\\frac{55!}{3!}\\approx 2.1\\cdot 10^{72}$ such arrangements.</p>\n" }, { "answer_id": 2082878, "score": 3, "text": "<p>EDIT: (This all depends on what is meant by 'arrangements'; I think that in this case the correct answer is probably <strong>not</strong> what I give below (which assumes the deck is also randomized); but instead the correct answer is $4^{52}$ (we don;t care about the ordering in each pile)).</p>\n\n<p>Suppose we take the usual deck of 52 cards; and add three blank White cards to the deck.</p>\n\n<p>Shuffle that deck and lay the cards out on the table, face up.</p>\n\n<p>Take the cards to the left of the leftmost White card (possibly none), and place them in a pile. Take the cards between the leftmost White and the second leftmost White card and place them in a second pile. Take the cards between the second leftmost White card and the third leftmost White and place them in a third pile. 
Take the remaining cards (those to the right of all the White cards) and place them in a fourth pile.</p>\n\n<p>This is the number of possibilities as in your problem statement; so there are $\\frac{(52+3)!}{3!}$ possible arrangements.</p>\n" }, { "answer_id": 2082864, "score": 2, "text": "<p><strong>The answer is simply $4^{52} $!!</strong></p>\n\n<hr>\n\n<p>Take for suppose you have to divide the cards into four groups numbered $1$, $2$, $3$ and $4$ respectively. These same numbers are also on the die. Each card has four ways of going into any one of the groups so hence for $52$ cards, we have the required number of possibilities. Hope it helps. </p>\n" }, { "answer_id": 2082860, "score": 3, "text": "<p>For each of the 52 cards there are four possible choices for what pile it is in. So there's 4 choices for the first card, times 4 choices for the second card, times 4 for the third and so on.</p>\n" }, { "answer_id": 2082876, "score": 3, "text": "<p>Assuming arrangement within each pile matters (unlike how the other answerers assumed) imagine three additional cards which are used to indicate when to switch to the next pile.</p>\n\n<p>There are then $\\frac{55!}{3!}\\approx 2.1\\cdot 10^{72}$ such arrangements.</p>\n" }, { "answer_id": 2082878, "score": 3, "text": "<p>EDIT: (This all depends on what is meant by 'arrangements'; I think that in this case the correct answer is probably <strong>not</strong> what I give below (which assumes the deck is also randomized); but instead the correct answer is $4^{52}$ (we don;t care about the ordering in each pile)).</p>\n\n<p>Suppose we take the usual deck of 52 cards; and add three blank White cards to the deck.</p>\n\n<p>Shuffle that deck and lay the cards out on the table, face up.</p>\n\n<p>Take the cards to the left of the leftmost White card (possibly none), and place them in a pile. Take the cards between the leftmost White and the second leftmost White card and place them in a second pile. Take the cards between the second leftmost White card and the third leftmost White and place them in a third pile. Take the remaining cards (those to the right of all the White cards) and place them in a fourth pile.</p>\n\n<p>This is the number of possibilities as in your problem statement; so there are $\\frac{(52+3)!}{3!}$ possible arrangements.</p>\n" }, { "answer_id": 2082864, "score": 2, "text": "<p><strong>The answer is simply $4^{52} $!!</strong></p>\n\n<hr>\n\n<p>Take for suppose you have to divide the cards into four groups numbered $1$, $2$, $3$ and $4$ respectively. These same numbers are also on the die. Each card has four ways of going into any one of the groups so hence for $52$ cards, we have the required number of possibilities. Hope it helps. </p>\n" }, { "answer_id": 2082860, "score": 3, "text": "<p>For each of the 52 cards there are four possible choices for what pile it is in. 
So there's 4 choices for the first card, times 4 choices for the second card, times 4 for the third and so on.</p>\n" }, { "answer_id": 2082876, "score": 3, "text": "<p>Assuming arrangement within each pile matters (unlike how the other answerers assumed) imagine three additional cards which are used to indicate when to switch to the next pile.</p>\n\n<p>There are then $\\frac{55!}{3!}\\approx 2.1\\cdot 10^{72}$ such arrangements.</p>\n" }, { "answer_id": 2082878, "score": 3, "text": "<p>EDIT: (This all depends on what is meant by 'arrangements'; I think that in this case the correct answer is probably <strong>not</strong> what I give below (which assumes the deck is also randomized); but instead the correct answer is $4^{52}$ (we don;t care about the ordering in each pile)).</p>\n\n<p>Suppose we take the usual deck of 52 cards; and add three blank White cards to the deck.</p>\n\n<p>Shuffle that deck and lay the cards out on the table, face up.</p>\n\n<p>Take the cards to the left of the leftmost White card (possibly none), and place them in a pile. Take the cards between the leftmost White and the second leftmost White card and place them in a second pile. Take the cards between the second leftmost White card and the third leftmost White and place them in a third pile. Take the remaining cards (those to the right of all the White cards) and place them in a fourth pile.</p>\n\n<p>This is the number of possibilities as in your problem statement; so there are $\\frac{(52+3)!}{3!}$ possible arrangements.</p>\n" }, { "answer_id": 2082864, "score": 2, "text": "<p><strong>The answer is simply $4^{52} $!!</strong></p>\n\n<hr>\n\n<p>Take for suppose you have to divide the cards into four groups numbered $1$, $2$, $3$ and $4$ respectively. These same numbers are also on the die. Each card has four ways of going into any one of the groups so hence for $52$ cards, we have the required number of possibilities. Hope it helps. </p>\n" } ]
Question: Combinations problem involving a standard pack of $52$ playing cards and a $4$ sided die: Part 1 Answer 1: <p>For each of the 52 cards there are four possible choices for what pile it is in. So there's 4 choices for the first card, times 4 choices for the second card, times 4 for the third and so on.</p> Answer 2: <p>Assuming arrangement within each pile matters (unlike how the other answerers assumed), imagine three additional cards which are used to indicate when to switch to the next pile.</p> <p>There are then $\frac{55!}{3!}\approx 2.1\cdot 10^{72}$ such arrangements.</p> Answer 3: <p>EDIT: (This all depends on what is meant by 'arrangements'; I think that in this case the correct answer is probably <strong>not</strong> what I give below (which assumes the deck is also randomized); instead the correct answer is $4^{52}$ (we don't care about the ordering in each pile)).</p> <p>Suppose we take the usual deck of 52 cards and add three blank White cards to the deck.</p> <p>Shuffle that deck and lay the cards out on the table, face up.</p> <p>Take the cards to the left of the leftmost White card (possibly none), and place them in a pile. Take the cards between the leftmost White and the second leftmost White card and place them in a second pile. Take the cards between the second leftmost White card and the third leftmost White and place them in a third pile. Take the remaining cards (those to the right of all the White cards) and place them in a fourth pile.</p> <p>This is the number of possibilities as in your problem statement; so there are $\frac{(52+3)!}{3!}$ possible arrangements.</p> Answer 4: <p><strong>The answer is simply $4^{52}$!!</strong></p> <hr> <p>Suppose you have to divide the cards into four groups numbered $1$, $2$, $3$ and $4$ respectively. These same numbers are also on the die. Each card has four ways of going into any one of the groups, so for $52$ cards we have the required number of possibilities. Hope it helps. </p>
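A quick numerical sanity check (a minimal Python sketch, not part of the original thread): it reproduces the exact value of $4^{52}$ quoted in the question, and the $55!/3!$ count from the answer that treats the order within each pile as significant.

```python
import math

# Each of the 52 cards independently lands in one of 4 distinguishable piles;
# the order inside a pile is ignored.
unordered = 4 ** 52
assert unordered == 20282409603651670423947251286016  # exact value quoted above

# If the order within each pile also matters: arrange 52 cards plus
# 3 indistinguishable pile separators, i.e. 55!/3!.
ordered = math.factorial(55) // math.factorial(3)

print(unordered)       # ~ 2.03e31
print(float(ordered))  # ~ 2.1e72
```

Both figures match the approximations cited in the thread.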
Solving for inverse-functions involving $[x]$ and $\{x\}$
2,083,029
4
https://math.stackexchange.com/questions/2083029/solving-for-inverse-functions-involving-x-and-x
<blockquote> <p>Given $f(x)=x+[x]$ where $$f:\Bbb{R}\to\bigcup_{n\in \Bbb Z}[2n,2n+1)$$</p> <p>Find $f^{-1}(x)$?</p> </blockquote> <hr> <p><strong>My Attempt:</strong></p> <p>$y=x+[x]=2[x]+\{x\}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $Since $x=[x]+\{x\}$</p> <p>$\Rightarrow y-\{x\}=2[x]$</p> <p>The above equation implies $y-\{x\}$ is an integer, therefore it can be written as $y-\{y\}$. Hence, it implies $\{x\}=\{y\}.$</p> <p>$\Rightarrow y-\{y\}=2[x]$</p> <p>$\Rightarrow [x]=\frac{[y]}{2}$</p> <p>Adding $\{x\}$ to both sides we get,</p> <p>$\Rightarrow [x]+\{x\}=\frac{[y]}{2}+\{x\}$</p> <p>$\Rightarrow x=\frac{[y]}{2}+\{y\}=y-\frac{[y]}{2}$</p> <p>Hence, </p> <p>$$f^{-1}(x)=x-\frac{[x]}{2}$$</p> <hr> <p><strong>What are some other ways to solve this problem and to find inverses of such type of function?</strong></p> <p>In the above context, $[x]$ is the Greatest Integer Function (floor) and $\{x\}$ is the fractional part of $x$.</p>
[ { "answer_id": 2083094, "score": 3, "text": "<p>Another way:\n\\begin{eqnarray}\ny &amp;=&amp; x+[x] \\\\\nx &amp;=&amp; y-[x] ~~;~~(1)\\\\\ny &amp;=&amp; x+[x] ~~;~~(From~main~function) \\\\\ny &amp;=&amp; x+[y-[x]] ~~;~~From(1)\\\\\ny &amp;=&amp; x+[y]-[x] \\\\\ny &amp;=&amp; x+[y]-(y-x) \\\\\ny &amp;=&amp; x+[y]-y+x \\\\\n2y &amp;=&amp; 2x+[y] \\\\\nx &amp;=&amp; \\frac12(2y-[y]) \\\\\nx &amp;=&amp; y-\\frac{[y]}{2} \n\\end{eqnarray}</p>\n" }, { "answer_id": 2083044, "score": 0, "text": "<p>The floor function is piecewise constant, so that $f$ is piecewise linear. Hence every piece is invertible separately, and the whole function is invertible provided there is no overlap of the codomains (which is the case here).</p>\n\n<p>With $x\\in[n,n+1)$, $f(x)=x+[x]=x+n$, so that $f^{-1}(y)=y-n$.</p>\n\n<p>As the codomain is $[2n,2n+1)$, you can retrieve $n$ using the fact that $[y]=[x]+[x]=2n$.</p>\n" }, { "answer_id": 2083050, "score": 0, "text": "<p>Another method is to plot the function $y=x+[x]$ and take its reflection in $y=x$ which will give you the plot of $y = x - \\frac{[x]}{2}$.</p>\n" }, { "answer_id": 2083094, "score": 3, "text": "<p>Another way:\n\\begin{eqnarray}\ny &amp;=&amp; x+[x] \\\\\nx &amp;=&amp; y-[x] ~~;~~(1)\\\\\ny &amp;=&amp; x+[x] ~~;~~(From~main~function) \\\\\ny &amp;=&amp; x+[y-[x]] ~~;~~From(1)\\\\\ny &amp;=&amp; x+[y]-[x] \\\\\ny &amp;=&amp; x+[y]-(y-x) \\\\\ny &amp;=&amp; x+[y]-y+x \\\\\n2y &amp;=&amp; 2x+[y] \\\\\nx &amp;=&amp; \\frac12(2y-[y]) \\\\\nx &amp;=&amp; y-\\frac{[y]}{2} \n\\end{eqnarray}</p>\n" }, { "answer_id": 2083044, "score": 0, "text": "<p>The floor function is piecewise constant, so that $f$ is piecewise linear. Hence every piece is invertible separately, and the whole function is invertible provided there is no overlap of the codomains (which is the case here).</p>\n\n<p>With $x\\in[n,n+1)$, $f(x)=x+[x]=x+n$, so that $f^{-1}(y)=y-n$.</p>\n\n<p>As the codomain is $[2n,2n+1)$, you can retrieve $n$ using the fact that $[y]=[x]+[x]=2n$.</p>\n" }, { "answer_id": 2083050, "score": 0, "text": "<p>Another method is to plot the function $y=x+[x]$ and take its reflection in $y=x$ which will give you the plot of $y = x - \\frac{[x]}{2}$.</p>\n" }, { "answer_id": 2083094, "score": 3, "text": "<p>Another way:\n\\begin{eqnarray}\ny &amp;=&amp; x+[x] \\\\\nx &amp;=&amp; y-[x] ~~;~~(1)\\\\\ny &amp;=&amp; x+[x] ~~;~~(From~main~function) \\\\\ny &amp;=&amp; x+[y-[x]] ~~;~~From(1)\\\\\ny &amp;=&amp; x+[y]-[x] \\\\\ny &amp;=&amp; x+[y]-(y-x) \\\\\ny &amp;=&amp; x+[y]-y+x \\\\\n2y &amp;=&amp; 2x+[y] \\\\\nx &amp;=&amp; \\frac12(2y-[y]) \\\\\nx &amp;=&amp; y-\\frac{[y]}{2} \n\\end{eqnarray}</p>\n" }, { "answer_id": 2083044, "score": 0, "text": "<p>The floor function is piecewise constant, so that $f$ is piecewise linear. 
Hence every piece is invertible separately, and the whole function is invertible provided there is no overlap of the codomains (which is the case here).</p>\n\n<p>With $x\\in[n,n+1)$, $f(x)=x+[x]=x+n$, so that $f^{-1}(y)=y-n$.</p>\n\n<p>As the codomain is $[2n,2n+1)$, you can retrieve $n$ using the fact that $[y]=[x]+[x]=2n$.</p>\n" }, { "answer_id": 2083050, "score": 0, "text": "<p>Another method is to plot the function $y=x+[x]$ and take its reflection in $y=x$ which will give you the plot of $y = x - \\frac{[x]}{2}$.</p>\n" }, { "answer_id": 2083094, "score": 3, "text": "<p>Another way:\n\\begin{eqnarray}\ny &amp;=&amp; x+[x] \\\\\nx &amp;=&amp; y-[x] ~~;~~(1)\\\\\ny &amp;=&amp; x+[x] ~~;~~(From~main~function) \\\\\ny &amp;=&amp; x+[y-[x]] ~~;~~From(1)\\\\\ny &amp;=&amp; x+[y]-[x] \\\\\ny &amp;=&amp; x+[y]-(y-x) \\\\\ny &amp;=&amp; x+[y]-y+x \\\\\n2y &amp;=&amp; 2x+[y] \\\\\nx &amp;=&amp; \\frac12(2y-[y]) \\\\\nx &amp;=&amp; y-\\frac{[y]}{2} \n\\end{eqnarray}</p>\n" }, { "answer_id": 2083044, "score": 0, "text": "<p>The floor function is piecewise constant, so that $f$ is piecewise linear. Hence every piece is invertible separately, and the whole function is invertible provided there is no overlap of the codomains (which is the case here).</p>\n\n<p>With $x\\in[n,n+1)$, $f(x)=x+[x]=x+n$, so that $f^{-1}(y)=y-n$.</p>\n\n<p>As the codomain is $[2n,2n+1)$, you can retrieve $n$ using the fact that $[y]=[x]+[x]=2n$.</p>\n" }, { "answer_id": 2083050, "score": 0, "text": "<p>Another method is to plot the function $y=x+[x]$ and take its reflection in $y=x$ which will give you the plot of $y = x - \\frac{[x]}{2}$.</p>\n" } ]
Question: Solving for inverse-functions involving $[x]$ and $\{x\}$ Answer 1: <p>Another way: \begin{eqnarray} y &amp;=&amp; x+[x] \\ x &amp;=&amp; y-[x] ~~;~~(1)\\ y &amp;=&amp; x+[x] ~~;~~(From~main~function) \\ y &amp;=&amp; x+[y-[x]] ~~;~~From(1)\\ y &amp;=&amp; x+[y]-[x] \\ y &amp;=&amp; x+[y]-(y-x) \\ y &amp;=&amp; x+[y]-y+x \\ 2y &amp;=&amp; 2x+[y] \\ x &amp;=&amp; \frac12(2y-[y]) \\ x &amp;=&amp; y-\frac{[y]}{2} \end{eqnarray}</p> Answer 2: <p>The floor function is piecewise constant, so that $f$ is piecewise linear. Hence every piece is invertible separately, and the whole function is invertible provided there is no overlap of the codomains (which is the case here).</p> <p>With $x\in[n,n+1)$, $f(x)=x+[x]=x+n$, so that $f^{-1}(y)=y-n$.</p> <p>As the codomain is $[2n,2n+1)$, you can retrieve $n$ using the fact that $[y]=[x]+[x]=2n$.</p> Answer 3: <p>Another method is to plot the function $y=x+[x]$ and take its reflection in $y=x$ which will give you the plot of $y = x - \frac{[x]}{2}$.</p>
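As a sanity check on the derived inverse (a minimal Python sketch, not from the thread; exact rational arithmetic via `Fraction` avoids floating-point floor issues):

```python
from fractions import Fraction
from math import floor

def f(x):
    # f(x) = x + [x]
    return x + floor(x)

def f_inv(y):
    # proposed inverse: f^{-1}(y) = y - [y]/2
    return y - Fraction(floor(y), 2)

# Spot-check on a grid of rationals, including negative values.
for num in range(-40, 40):
    x = Fraction(num, 7)
    y = f(x)
    assert floor(y) % 2 == 0   # y lands in some interval [2n, 2n+1)
    assert f_inv(y) == x       # the round trip recovers x exactly
print("f_inv(f(x)) == x on all samples")
```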
Identifying the digits of $37 \cdot aaaa\ldots a$.
2,083,110
4
https://math.stackexchange.com/questions/2083110/identifying-the-digits-of-37-cdot-aaaa-ldots-a
<p>With a calculator, I have noticed that the integer $37$, multiplied by certain numbers, yields products with a noticeable digit structure. </p> <p>For instance, let $aaaa\ldots a$ be a natural number of $n$ identical digits. Then $ 37 \cdot aaaa\ldots a$ is a number with $n+1$ or $n+2$ digits of the form</p> <blockquote> <p>$$\underbrace{4\cdot a}_{1\text{ or }2}~ \underbrace{aaa\ldots a}_{n-2} ~\underbrace{7\cdot a}_{2^{*}}.$$</p> </blockquote> <p>$*$ if $a=1$, then $7 \cdot a$ is the sequence of digits $07$. </p> <p>I am wondering whether the result above can be proven using some number-theory tool.</p> <p>Thanks in advance!</p>
[ { "answer_id": 2083125, "score": 4, "text": "<p>Nice observation!</p>\n\n<p>This follows because $aaa\\cdots a = a \\cdot 111 \\cdots 1$ and</p>\n\n<p>$37 \\cdot 11 = 407$</p>\n\n<p>$37 \\cdot 111 = 4107$</p>\n\n<p>$37 \\cdot 1111 = 41107$</p>\n\n<p>$\\cdots$</p>\n\n<p>Indeed, let $u_n = 111 \\cdots 1$ ($n$ ones). Then $u_n = \\dfrac{10^n-1}{9}$ and</p>\n\n<p>$$\n\\begin{align}\n37 \\cdot 111\\cdots 1 \\quad (n \\text{ ones}) &amp;=\n37u_n\\\\&amp;= 36u_n + u_n\\\\\n&amp;= 4\\cdot9\\cdot u_n+10u_{n-2}+11\\\\\n&amp;=4(10^n-1)+10u_{n-2}+4+7\\\\\n&amp;=4\\cdot10^n+10u_{n-2}+7\\\\\n&amp;=4111\\cdots107 \\quad (n-2 \\text{ ones})\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083155, "score": 4, "text": "<p>This is not only true for $37$ but all $2$-digit numbers of form $[ab]$, where $a + b =10$.</p>\n\n<p>Since $[ab]=10a+b$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11\\\\\n&amp;=(10a+b)\\times11\\\\\n&amp;=100a+10(a+b)+b\\\\\n&amp;=100(a+1)+b\\\\\n&amp;=[(a+1)0b]\\\\\n\\end{align}\n$</p>\n\n<p>In a similar way,</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\\\\n&amp;=10^{n}a+10^{n-1}(a+b)+10^{n-2}(a+b)...+10(a+b)+b\\\\\n&amp;=10^{n}(a+1)+10^{n-1}+...+100+b\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\\\\n\\end{align}\n$</p>\n\n<p>Now for $[ab]\\times [kk..n\\ times..kk]$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\times k\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\times k\\\\\n&amp;=[(k\\times(a+1))kk..n-2\\ times..kk00]+(k\\times b)\\\\\n\\end{align}\n$</p>\n\n<p>This with some modification can be applied to all numbers of form $[abc...]$, where $a+b+c+...=10$.</p>\n" }, { "answer_id": 2083207, "score": 1, "text": "<p>Observe the patterns in the written calculation of these products</p>\n\n<p>$$a=1\\to\\begin{matrix}3&amp;7\\\\&amp;3&amp;7\\\\&amp;&amp;3&amp;7\\\\&amp;&amp;&amp;3&amp;7\\end{matrix}$$</p>\n\n<p>$$a=2\\to\\begin{matrix}7&amp;4\\\\&amp;7&amp;4\\\\&amp;&amp;7&amp;4\\\\&amp;&amp;&amp;7&amp;4\\end{matrix}$$</p>\n\n<p>$$a=3\\to\\begin{matrix}1&amp;1&amp;1\\\\&amp;1&amp;1&amp;1\\\\&amp;&amp;1&amp;1&amp;1\\\\&amp;&amp;&amp;1&amp;1&amp;1\\end{matrix}$$</p>\n\n<p>$$a=4\\to\\begin{matrix}1&amp;4&amp;8\\\\&amp;1&amp;4&amp;8\\\\&amp;&amp;1&amp;4&amp;8\\\\&amp;&amp;&amp;1&amp;4&amp;8\\end{matrix}$$</p>\n\n<p>$$a=5\\to\\begin{matrix}1&amp;8&amp;5\\\\&amp;1&amp;8&amp;5\\\\&amp;&amp;1&amp;8&amp;5\\\\&amp;&amp;&amp;1&amp;8&amp;5\\end{matrix}$$</p>\n\n<p>$$\\cdots$$</p>\n\n<p>$$a=9\\to\\begin{matrix}3&amp;3&amp;3\\\\&amp;3&amp;3&amp;3\\\\&amp;&amp;3&amp;3&amp;3\\\\&amp;&amp;&amp;3&amp;3&amp;3\\end{matrix}$$</p>\n\n<p>A digit of the product, in a complete column, is the sum of the digits of $37\\cdot a$ plus a possible carry from the previous column, i.e. the sum of the digits of the sum of the digits of $37\\cdot a$.</p>\n\n<p>The initial and final digit pairs correspond to partial sums.</p>\n\n<p>We have</p>\n\n<p>$$\n37\\cdot1=37\\to10\\to1,\\\\\n37\\cdot2=74\\to11\\to2,\\\\\n37\\cdot3=111\\to3\\to3,\\\\\n37\\cdot4=148\\to13\\to4,\\\\\n37\\cdot5=185\\to14\\to5,\\\\\n\\cdots\\\\\n37\\cdot9=333\\to9\\to9.$$</p>\n\n<p>The phenomenon is explained by the fact that $37=4\\cdot9+1\\to10\\to1$, and this occurs for all factors of the form $k\\cdot9+1$. 
For example,</p>\n\n<p>$$73\\cdot66666666666666=4866666666666618.$$</p>\n" }, { "answer_id": 2083125, "score": 4, "text": "<p>Nice observation!</p>\n\n<p>This follows because $aaa\\cdots a = a \\cdot 111 \\cdots 1$ and</p>\n\n<p>$37 \\cdot 11 = 407$</p>\n\n<p>$37 \\cdot 111 = 4107$</p>\n\n<p>$37 \\cdot 1111 = 41107$</p>\n\n<p>$\\cdots$</p>\n\n<p>Indeed, let $u_n = 111 \\cdots 1$ ($n$ ones). Then $u_n = \\dfrac{10^n-1}{9}$ and</p>\n\n<p>$$\n\\begin{align}\n37 \\cdot 111\\cdots 1 \\quad (n \\text{ ones}) &amp;=\n37u_n\\\\&amp;= 36u_n + u_n\\\\\n&amp;= 4\\cdot9\\cdot u_n+10u_{n-2}+11\\\\\n&amp;=4(10^n-1)+10u_{n-2}+4+7\\\\\n&amp;=4\\cdot10^n+10u_{n-2}+7\\\\\n&amp;=4111\\cdots107 \\quad (n-2 \\text{ ones})\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083155, "score": 4, "text": "<p>This is not only true for $37$ but all $2$-digit numbers of form $[ab]$, where $a + b =10$.</p>\n\n<p>Since $[ab]=10a+b$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11\\\\\n&amp;=(10a+b)\\times11\\\\\n&amp;=100a+10(a+b)+b\\\\\n&amp;=100(a+1)+b\\\\\n&amp;=[(a+1)0b]\\\\\n\\end{align}\n$</p>\n\n<p>In a similar way,</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\\\\n&amp;=10^{n}a+10^{n-1}(a+b)+10^{n-2}(a+b)...+10(a+b)+b\\\\\n&amp;=10^{n}(a+1)+10^{n-1}+...+100+b\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\\\\n\\end{align}\n$</p>\n\n<p>Now for $[ab]\\times [kk..n\\ times..kk]$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\times k\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\times k\\\\\n&amp;=[(k\\times(a+1))kk..n-2\\ times..kk00]+(k\\times b)\\\\\n\\end{align}\n$</p>\n\n<p>This with some modification can be applied to all numbers of form $[abc...]$, where $a+b+c+...=10$.</p>\n" }, { "answer_id": 2083207, "score": 1, "text": "<p>Observe the patterns in the written calculation of these products</p>\n\n<p>$$a=1\\to\\begin{matrix}3&amp;7\\\\&amp;3&amp;7\\\\&amp;&amp;3&amp;7\\\\&amp;&amp;&amp;3&amp;7\\end{matrix}$$</p>\n\n<p>$$a=2\\to\\begin{matrix}7&amp;4\\\\&amp;7&amp;4\\\\&amp;&amp;7&amp;4\\\\&amp;&amp;&amp;7&amp;4\\end{matrix}$$</p>\n\n<p>$$a=3\\to\\begin{matrix}1&amp;1&amp;1\\\\&amp;1&amp;1&amp;1\\\\&amp;&amp;1&amp;1&amp;1\\\\&amp;&amp;&amp;1&amp;1&amp;1\\end{matrix}$$</p>\n\n<p>$$a=4\\to\\begin{matrix}1&amp;4&amp;8\\\\&amp;1&amp;4&amp;8\\\\&amp;&amp;1&amp;4&amp;8\\\\&amp;&amp;&amp;1&amp;4&amp;8\\end{matrix}$$</p>\n\n<p>$$a=5\\to\\begin{matrix}1&amp;8&amp;5\\\\&amp;1&amp;8&amp;5\\\\&amp;&amp;1&amp;8&amp;5\\\\&amp;&amp;&amp;1&amp;8&amp;5\\end{matrix}$$</p>\n\n<p>$$\\cdots$$</p>\n\n<p>$$a=9\\to\\begin{matrix}3&amp;3&amp;3\\\\&amp;3&amp;3&amp;3\\\\&amp;&amp;3&amp;3&amp;3\\\\&amp;&amp;&amp;3&amp;3&amp;3\\end{matrix}$$</p>\n\n<p>A digit of the product, in a complete column, is the sum of the digits of $37\\cdot a$ plus a possible carry from the previous column, i.e. the sum of the digits of the sum of the digits of $37\\cdot a$.</p>\n\n<p>The initial and final digit pairs correspond to partial sums.</p>\n\n<p>We have</p>\n\n<p>$$\n37\\cdot1=37\\to10\\to1,\\\\\n37\\cdot2=74\\to11\\to2,\\\\\n37\\cdot3=111\\to3\\to3,\\\\\n37\\cdot4=148\\to13\\to4,\\\\\n37\\cdot5=185\\to14\\to5,\\\\\n\\cdots\\\\\n37\\cdot9=333\\to9\\to9.$$</p>\n\n<p>The phenomenon is explained by the fact that $37=4\\cdot9+1\\to10\\to1$, and this occurs for all factors of the form $k\\cdot9+1$. 
For example,</p>\n\n<p>$$73\\cdot66666666666666=4866666666666618.$$</p>\n" }, { "answer_id": 2083125, "score": 4, "text": "<p>Nice observation!</p>\n\n<p>This follows because $aaa\\cdots a = a \\cdot 111 \\cdots 1$ and</p>\n\n<p>$37 \\cdot 11 = 407$</p>\n\n<p>$37 \\cdot 111 = 4107$</p>\n\n<p>$37 \\cdot 1111 = 41107$</p>\n\n<p>$\\cdots$</p>\n\n<p>Indeed, let $u_n = 111 \\cdots 1$ ($n$ ones). Then $u_n = \\dfrac{10^n-1}{9}$ and</p>\n\n<p>$$\n\\begin{align}\n37 \\cdot 111\\cdots 1 \\quad (n \\text{ ones}) &amp;=\n37u_n\\\\&amp;= 36u_n + u_n\\\\\n&amp;= 4\\cdot9\\cdot u_n+10u_{n-2}+11\\\\\n&amp;=4(10^n-1)+10u_{n-2}+4+7\\\\\n&amp;=4\\cdot10^n+10u_{n-2}+7\\\\\n&amp;=4111\\cdots107 \\quad (n-2 \\text{ ones})\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083155, "score": 4, "text": "<p>This is not only true for $37$ but all $2$-digit numbers of form $[ab]$, where $a + b =10$.</p>\n\n<p>Since $[ab]=10a+b$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11\\\\\n&amp;=(10a+b)\\times11\\\\\n&amp;=100a+10(a+b)+b\\\\\n&amp;=100(a+1)+b\\\\\n&amp;=[(a+1)0b]\\\\\n\\end{align}\n$</p>\n\n<p>In a similar way,</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\\\\n&amp;=10^{n}a+10^{n-1}(a+b)+10^{n-2}(a+b)...+10(a+b)+b\\\\\n&amp;=10^{n}(a+1)+10^{n-1}+...+100+b\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\\\\n\\end{align}\n$</p>\n\n<p>Now for $[ab]\\times [kk..n\\ times..kk]$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\times k\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\times k\\\\\n&amp;=[(k\\times(a+1))kk..n-2\\ times..kk00]+(k\\times b)\\\\\n\\end{align}\n$</p>\n\n<p>This with some modification can be applied to all numbers of form $[abc...]$, where $a+b+c+...=10$.</p>\n" }, { "answer_id": 2083207, "score": 1, "text": "<p>Observe the patterns in the written calculation of these products</p>\n\n<p>$$a=1\\to\\begin{matrix}3&amp;7\\\\&amp;3&amp;7\\\\&amp;&amp;3&amp;7\\\\&amp;&amp;&amp;3&amp;7\\end{matrix}$$</p>\n\n<p>$$a=2\\to\\begin{matrix}7&amp;4\\\\&amp;7&amp;4\\\\&amp;&amp;7&amp;4\\\\&amp;&amp;&amp;7&amp;4\\end{matrix}$$</p>\n\n<p>$$a=3\\to\\begin{matrix}1&amp;1&amp;1\\\\&amp;1&amp;1&amp;1\\\\&amp;&amp;1&amp;1&amp;1\\\\&amp;&amp;&amp;1&amp;1&amp;1\\end{matrix}$$</p>\n\n<p>$$a=4\\to\\begin{matrix}1&amp;4&amp;8\\\\&amp;1&amp;4&amp;8\\\\&amp;&amp;1&amp;4&amp;8\\\\&amp;&amp;&amp;1&amp;4&amp;8\\end{matrix}$$</p>\n\n<p>$$a=5\\to\\begin{matrix}1&amp;8&amp;5\\\\&amp;1&amp;8&amp;5\\\\&amp;&amp;1&amp;8&amp;5\\\\&amp;&amp;&amp;1&amp;8&amp;5\\end{matrix}$$</p>\n\n<p>$$\\cdots$$</p>\n\n<p>$$a=9\\to\\begin{matrix}3&amp;3&amp;3\\\\&amp;3&amp;3&amp;3\\\\&amp;&amp;3&amp;3&amp;3\\\\&amp;&amp;&amp;3&amp;3&amp;3\\end{matrix}$$</p>\n\n<p>A digit of the product, in a complete column, is the sum of the digits of $37\\cdot a$ plus a possible carry from the previous column, i.e. the sum of the digits of the sum of the digits of $37\\cdot a$.</p>\n\n<p>The initial and final digit pairs correspond to partial sums.</p>\n\n<p>We have</p>\n\n<p>$$\n37\\cdot1=37\\to10\\to1,\\\\\n37\\cdot2=74\\to11\\to2,\\\\\n37\\cdot3=111\\to3\\to3,\\\\\n37\\cdot4=148\\to13\\to4,\\\\\n37\\cdot5=185\\to14\\to5,\\\\\n\\cdots\\\\\n37\\cdot9=333\\to9\\to9.$$</p>\n\n<p>The phenomenon is explained by the fact that $37=4\\cdot9+1\\to10\\to1$, and this occurs for all factors of the form $k\\cdot9+1$. 
For example,</p>\n\n<p>$$73\\cdot66666666666666=4866666666666618.$$</p>\n" }, { "answer_id": 2083125, "score": 4, "text": "<p>Nice observation!</p>\n\n<p>This follows because $aaa\\cdots a = a \\cdot 111 \\cdots 1$ and</p>\n\n<p>$37 \\cdot 11 = 407$</p>\n\n<p>$37 \\cdot 111 = 4107$</p>\n\n<p>$37 \\cdot 1111 = 41107$</p>\n\n<p>$\\cdots$</p>\n\n<p>Indeed, let $u_n = 111 \\cdots 1$ ($n$ ones). Then $u_n = \\dfrac{10^n-1}{9}$ and</p>\n\n<p>$$\n\\begin{align}\n37 \\cdot 111\\cdots 1 \\quad (n \\text{ ones}) &amp;=\n37u_n\\\\&amp;= 36u_n + u_n\\\\\n&amp;= 4\\cdot9\\cdot u_n+10u_{n-2}+11\\\\\n&amp;=4(10^n-1)+10u_{n-2}+4+7\\\\\n&amp;=4\\cdot10^n+10u_{n-2}+7\\\\\n&amp;=4111\\cdots107 \\quad (n-2 \\text{ ones})\n\\end{align}\n$$</p>\n" }, { "answer_id": 2083155, "score": 4, "text": "<p>This is not only true for $37$ but all $2$-digit numbers of form $[ab]$, where $a + b =10$.</p>\n\n<p>Since $[ab]=10a+b$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11\\\\\n&amp;=(10a+b)\\times11\\\\\n&amp;=100a+10(a+b)+b\\\\\n&amp;=100(a+1)+b\\\\\n&amp;=[(a+1)0b]\\\\\n\\end{align}\n$</p>\n\n<p>In a similar way,</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\\\\n&amp;=10^{n}a+10^{n-1}(a+b)+10^{n-2}(a+b)...+10(a+b)+b\\\\\n&amp;=10^{n}(a+1)+10^{n-1}+...+100+b\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\\\\n\\end{align}\n$</p>\n\n<p>Now for $[ab]\\times [kk..n\\ times..kk]$</p>\n\n<p>$\n\\begin{align}\n&amp;\\ \\ \\ \\ \\ [ab]\\times11...n\\ times...11\\times k\\\\\n&amp;=[(a+1)11...n-2\\ times...110b]\\times k\\\\\n&amp;=[(k\\times(a+1))kk..n-2\\ times..kk00]+(k\\times b)\\\\\n\\end{align}\n$</p>\n\n<p>This with some modification can be applied to all numbers of form $[abc...]$, where $a+b+c+...=10$.</p>\n" }, { "answer_id": 2083207, "score": 1, "text": "<p>Observe the patterns in the written calculation of these products</p>\n\n<p>$$a=1\\to\\begin{matrix}3&amp;7\\\\&amp;3&amp;7\\\\&amp;&amp;3&amp;7\\\\&amp;&amp;&amp;3&amp;7\\end{matrix}$$</p>\n\n<p>$$a=2\\to\\begin{matrix}7&amp;4\\\\&amp;7&amp;4\\\\&amp;&amp;7&amp;4\\\\&amp;&amp;&amp;7&amp;4\\end{matrix}$$</p>\n\n<p>$$a=3\\to\\begin{matrix}1&amp;1&amp;1\\\\&amp;1&amp;1&amp;1\\\\&amp;&amp;1&amp;1&amp;1\\\\&amp;&amp;&amp;1&amp;1&amp;1\\end{matrix}$$</p>\n\n<p>$$a=4\\to\\begin{matrix}1&amp;4&amp;8\\\\&amp;1&amp;4&amp;8\\\\&amp;&amp;1&amp;4&amp;8\\\\&amp;&amp;&amp;1&amp;4&amp;8\\end{matrix}$$</p>\n\n<p>$$a=5\\to\\begin{matrix}1&amp;8&amp;5\\\\&amp;1&amp;8&amp;5\\\\&amp;&amp;1&amp;8&amp;5\\\\&amp;&amp;&amp;1&amp;8&amp;5\\end{matrix}$$</p>\n\n<p>$$\\cdots$$</p>\n\n<p>$$a=9\\to\\begin{matrix}3&amp;3&amp;3\\\\&amp;3&amp;3&amp;3\\\\&amp;&amp;3&amp;3&amp;3\\\\&amp;&amp;&amp;3&amp;3&amp;3\\end{matrix}$$</p>\n\n<p>A digit of the product, in a complete column, is the sum of the digits of $37\\cdot a$ plus a possible carry from the previous column, i.e. the sum of the digits of the sum of the digits of $37\\cdot a$.</p>\n\n<p>The initial and final digit pairs correspond to partial sums.</p>\n\n<p>We have</p>\n\n<p>$$\n37\\cdot1=37\\to10\\to1,\\\\\n37\\cdot2=74\\to11\\to2,\\\\\n37\\cdot3=111\\to3\\to3,\\\\\n37\\cdot4=148\\to13\\to4,\\\\\n37\\cdot5=185\\to14\\to5,\\\\\n\\cdots\\\\\n37\\cdot9=333\\to9\\to9.$$</p>\n\n<p>The phenomenon is explained by the fact that $37=4\\cdot9+1\\to10\\to1$, and this occurs for all factors of the form $k\\cdot9+1$. For example,</p>\n\n<p>$$73\\cdot66666666666666=4866666666666618.$$</p>\n" } ]
Question: Identifying the digits of $37 \cdot aaaa\ldots a$. Answer 1: <p>Nice observation!</p> <p>This follows because $aaa\cdots a = a \cdot 111 \cdots 1$ and</p> <p>$37 \cdot 11 = 407$</p> <p>$37 \cdot 111 = 4107$</p> <p>$37 \cdot 1111 = 41107$</p> <p>$\cdots$</p> <p>Indeed, let $u_n = 111 \cdots 1$ ($n$ ones). Then $u_n = \dfrac{10^n-1}{9}$, so $u_n = 100u_{n-2}+11$, and</p> <p>$$ \begin{align} 37 \cdot 111\cdots 1 \quad (n \text{ ones}) &amp;= 37u_n\\&amp;= 36u_n + u_n\\ &amp;= 4\cdot9\cdot u_n+100u_{n-2}+11\\ &amp;=4(10^n-1)+100u_{n-2}+4+7\\ &amp;=4\cdot10^n+100u_{n-2}+7\\ &amp;=4111\cdots107 \quad (n-2 \text{ ones}) \end{align} $$</p> Answer 2: <p>This is not only true for $37$ but all $2$-digit numbers of form $[ab]$, where $a + b =10$.</p> <p>Since $[ab]=10a+b$</p> <p>$ \begin{align} &amp;\ \ \ \ \ [ab]\times11\\ &amp;=(10a+b)\times11\\ &amp;=100a+10(a+b)+b\\ &amp;=100(a+1)+b\\ &amp;=[(a+1)0b]\\ \end{align} $</p> <p>In a similar way,</p> <p>$ \begin{align} &amp;\ \ \ \ \ [ab]\times11...n\ times...11\\ &amp;=10^{n}a+10^{n-1}(a+b)+10^{n-2}(a+b)...+10(a+b)+b\\ &amp;=10^{n}(a+1)+10^{n-1}+...+100+b\\ &amp;=[(a+1)11...n-2\ times...110b]\\ \end{align} $</p> <p>Now for $[ab]\times [kk..n\ times..kk]$</p> <p>$ \begin{align} &amp;\ \ \ \ \ [ab]\times11...n\ times...11\times k\\ &amp;=[(a+1)11...n-2\ times...110b]\times k\\ &amp;=[(k\times(a+1))kk..n-2\ times..kk00]+(k\times b)\\ \end{align} $</p> <p>This with some modification can be applied to all numbers of form $[abc...]$, where $a+b+c+...=10$.</p> Answer 3: <p>Observe the patterns in the written calculation of these products</p> <p>$$a=1\to\begin{matrix}3&amp;7\\&amp;3&amp;7\\&amp;&amp;3&amp;7\\&amp;&amp;&amp;3&amp;7\end{matrix}$$</p> <p>$$a=2\to\begin{matrix}7&amp;4\\&amp;7&amp;4\\&amp;&amp;7&amp;4\\&amp;&amp;&amp;7&amp;4\end{matrix}$$</p> <p>$$a=3\to\begin{matrix}1&amp;1&amp;1\\&amp;1&amp;1&amp;1\\&amp;&amp;1&amp;1&amp;1\\&amp;&amp;&amp;1&amp;1&amp;1\end{matrix}$$</p> <p>$$a=4\to\begin{matrix}1&amp;4&amp;8\\&amp;1&amp;4&amp;8\\&amp;&amp;1&amp;4&amp;8\\&amp;&amp;&amp;1&amp;4&amp;8\end{matrix}$$</p> <p>$$a=5\to\begin{matrix}1&amp;8&amp;5\\&amp;1&amp;8&amp;5\\&amp;&amp;1&amp;8&amp;5\\&amp;&amp;&amp;1&amp;8&amp;5\end{matrix}$$</p> <p>$$\cdots$$</p> <p>$$a=9\to\begin{matrix}3&amp;3&amp;3\\&amp;3&amp;3&amp;3\\&amp;&amp;3&amp;3&amp;3\\&amp;&amp;&amp;3&amp;3&amp;3\end{matrix}$$</p> <p>A digit of the product, in a complete column, is the sum of the digits of $37\cdot a$ plus a possible carry from the previous column, i.e. the sum of the digits of the sum of the digits of $37\cdot a$.</p> <p>The initial and final digit pairs correspond to partial sums.</p> <p>We have</p> <p>$$ 37\cdot1=37\to10\to1,\\ 37\cdot2=74\to11\to2,\\ 37\cdot3=111\to3\to3,\\ 37\cdot4=148\to13\to4,\\ 37\cdot5=185\to14\to5,\\ \cdots\\ 37\cdot9=333\to9\to9.$$</p> <p>The phenomenon is explained by the fact that $37=4\cdot9+1\to10\to1$, and this occurs for all factors of the form $k\cdot9+1$. For example,</p> <p>$$73\cdot66666666666666=4866666666666618.$$</p>
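The digit-block pattern is easy to machine-check (a small Python sketch, not from the thread); the `zfill` handles the $a=1$ case, where $7\cdot a$ must be read as the two digits $07$:

```python
# Check 37 * (a repeated n times) == [4a] [a ... a] [7a] as digit blocks.
for a in range(1, 10):
    for n in range(2, 12):
        rep = int(str(a) * n)          # the repdigit aaa...a (n digits)
        head = str(4 * a)              # one or two leading digits
        tail = str(7 * a).zfill(2)     # "07" when a = 1, per the footnote
        expected = int(head + str(a) * (n - 2) + tail)
        assert 37 * rep == expected, (a, n)
print("pattern verified for a = 1..9 and n = 2..11")
```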
Simple applications of Lie algebra in group theory
2,083,134
4
https://math.stackexchange.com/questions/2083134/simple-applications-of-lie-algebra-in-group-theory
<p>In his book Lie Algebras, Jacobson gives a motivation for Lie algebras as a tool used in a difficult problem in group theory - Burnside's problem.</p> <p>I was wondering if there is any <em>simple/elementary</em> application of Lie algebras in group theory which can be illustrated in class in the beginning/introduction while teaching a course on Lie algebras.</p>
[ { "answer_id": 2083999, "score": 4, "text": "<p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p>\n\n<p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p>\n\n<p>Let us denote this order by $p^{\\beta_p(k)}$. One can show the following result:</p>\n\n<p><em>Proposition:</em> For $k\\ge 4$ and all primes $p$ we have \n$$\n\\beta_p(k)\\ge 2^{k-1}+2k−4.\n$$\nA proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$.\nFor references on similar problems see <a href=\"http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf\" rel=\"noreferrer\">here</a>.</p>\n\n<p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to\nshow that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p>\n" }, { "answer_id": 3828928, "score": 1, "text": "<p>Geometric control theory. If you would like a specific example, see <a href=\"https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc\" rel=\"nofollow noreferrer\">the Reeds-Schepp car</a>.</p>\n<p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p>\n" }, { "answer_id": 2083999, "score": 4, "text": "<p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p>\n\n<p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p>\n\n<p>Let us denote this order by $p^{\\beta_p(k)}$. One can show the following result:</p>\n\n<p><em>Proposition:</em> For $k\\ge 4$ and all primes $p$ we have \n$$\n\\beta_p(k)\\ge 2^{k-1}+2k−4.\n$$\nA proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$.\nFor references on similar problems see <a href=\"http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf\" rel=\"noreferrer\">here</a>.</p>\n\n<p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to\nshow that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p>\n" }, { "answer_id": 3828928, "score": 1, "text": "<p>Geometric control theory. If you would like a specific example, see <a href=\"https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc\" rel=\"nofollow noreferrer\">the Reeds-Schepp car</a>.</p>\n<p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p>\n" }, { "answer_id": 2083999, "score": 4, "text": "<p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. 
They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p>\n\n<p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p>\n\n<p>Let us denote this order by $p^{\\beta_p(k)}$. One can show the following result:</p>\n\n<p><em>Proposition:</em> For $k\\ge 4$ and all primes $p$ we have \n$$\n\\beta_p(k)\\ge 2^{k-1}+2k−4.\n$$\nA proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$.\nFor references on similar problems see <a href=\"http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf\" rel=\"noreferrer\">here</a>.</p>\n\n<p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to\nshow that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p>\n" }, { "answer_id": 3828928, "score": 1, "text": "<p>Geometric control theory. If you would like a specific example, see <a href=\"https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc\" rel=\"nofollow noreferrer\">the Reeds-Schepp car</a>.</p>\n<p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p>\n" }, { "answer_id": 2083999, "score": 4, "text": "<p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p>\n\n<p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p>\n\n<p>Let us denote this order by $p^{\\beta_p(k)}$. One can show the following result:</p>\n\n<p><em>Proposition:</em> For $k\\ge 4$ and all primes $p$ we have \n$$\n\\beta_p(k)\\ge 2^{k-1}+2k−4.\n$$\nA proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$.\nFor references on similar problems see <a href=\"http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf\" rel=\"noreferrer\">here</a>.</p>\n\n<p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to\nshow that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p>\n" }, { "answer_id": 3828928, "score": 1, "text": "<p>Geometric control theory. If you would like a specific example, see <a href=\"https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc\" rel=\"nofollow noreferrer\">the Reeds-Schepp car</a>.</p>\n<p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p>\n" } ]
Question: Simple applications of Lie algebra in group theory Answer 1: <p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p> <p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p> <p>Let us denote this order by $p^{\beta_p(k)}$. One can show the following result:</p> <p><em>Proposition:</em> For $k\ge 4$ and all primes $p$ we have $$ \beta_p(k)\ge 2^{k-1}+2k−4. $$ A proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$. For references on similar problems see <a href="http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf" rel="noreferrer">here</a>.</p> <p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to show that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p> Answer 2: <p>Geometric control theory. If you would like a specific example, see <a href="https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc" rel="nofollow noreferrer">the Reeds-Schepp car</a>.</p> <p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p> Answer 3: <p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p> <p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p> <p>Let us denote this order by $p^{\beta_p(k)}$. One can show the following result:</p> <p><em>Proposition:</em> For $k\ge 4$ and all primes $p$ we have $$ \beta_p(k)\ge 2^{k-1}+2k−4. $$ A proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$. For references on similar problems see <a href="http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf" rel="noreferrer">here</a>.</p> <p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to show that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p> Answer 4: <p>Geometric control theory. If you would like a specific example, see <a href="https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc" rel="nofollow noreferrer">the Reeds-Schepp car</a>.</p> <p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p> Answer 5: <p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. 
For example, we have the following question by Burnside:</p> <p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p> <p>Let us denote this order by $p^{\beta_p(k)}$. One can show the following result:</p> <p><em>Proposition:</em> For $k\ge 4$ and all primes $p$ we have $$ \beta_p(k)\ge 2^{k-1}+2k−4. $$ A proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$. For references on similar problems see <a href="http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf" rel="noreferrer">here</a>.</p> <p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to show that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p> Answer 6: <p>Geometric control theory. If you would like a specific example, see <a href="https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc" rel="nofollow noreferrer">the Reeds-Schepp car</a>.</p> <p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p> Answer 7: <p>There are more elementary questions than the famous Burnside problems, namely some problems concerning the solvability class of $p$-groups. They have a proof using basic Lie algebra theory. For example, we have the following question by Burnside:</p> <p><em>Question (Burnside 1913):</em> What is the smallest order of a group with prime power order and derived length $k$ ?</p> <p>Let us denote this order by $p^{\beta_p(k)}$. One can show the following result:</p> <p><em>Proposition:</em> For $k\ge 4$ and all primes $p$ we have $$ \beta_p(k)\ge 2^{k-1}+2k−4. $$ A proof only using elementary Lie algebra theory was given by L. A. Bokut in $1971$. For references on similar problems see <a href="http://homepage.univie.ac.at/Dietrich.Burde/papers/burde_48_derived_lie.pdf" rel="noreferrer">here</a>.</p> <p>Edit: For an introductory class in Lie algebras, an easier example is perhaps to show that under certain assumptions one may put all elements of a matrix group simultaneously into triangular form. This can be proved by Lie's Theorem for Lie algebras, which is a basic result in any class on Lie algebras.</p> Answer 8: <p>Geometric control theory. If you would like a specific example, see <a href="https://www.semanticscholar.org/paper/SHORTEST-PATHS-FOR-THE-REEDS-SHEPP-CAR%3A-A-WORKED-OF-Sussmann-Tang/dd9dcf2c4a2636c03a8a0b589421ef1e4ece6cfc" rel="nofollow noreferrer">the Reeds-Schepp car</a>.</p> <p>The groups in control theory are the 1-parameter groups of diffeomorphisms induced by the vector fields that are the controls.</p>
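A small computational aside on the bound quoted in the row above (a minimal sketch; Python is an editorial choice, the thread itself contains no code) — it simply tabulates how fast the exponent 2^(k-1) + 2k - 4 in Burnside's lower bound grows:

<pre><code>
# Burnside's lower bound, as stated above: for k >= 4, a p-group of
# derived length k has order at least p^(2^(k-1) + 2k - 4), for every prime p.

def burnside_exponent_bound(k: int) -> int:
    """Lower bound on n such that |G| = p^n when G has derived length k >= 4."""
    return 2 ** (k - 1) + 2 * k - 4

for k in range(4, 11):
    print(f"derived length {k}: |G| >= p^{burnside_exponent_bound(k)}")
</code></pre>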
Finite subgroups and $H^2 = H$
2,083,216
4
https://math.stackexchange.com/questions/2083216/finite-subgroups-and-h2-h
<p>Let $G$ be a group, and $H$ a subset of $G$.</p> <p>If $H$ is a subgroup of $G$, then $H^2 = H$. I'm interested in what holds if $H^2 = H$. It's easy to verify that $H$ is a semigroup. If $H$ is finite, $H$ is also a monoid:</p> <p>Assume $e \notin H$. Fix any $h \in H$. Then $h \notin hH$. Since $hH \subseteq H^2 = H$, we have $|hH| &lt; |H|$, a contradiction. Thus $e \in H$.</p> <p>If $H$ is infinite, then $H$ needn't be a monoid: $(\mathbb{Q}, +)$ is a group, and $(\mathbb{Q}^+, +)$ isn't a monoid, even though $\mathbb{Q}^+ + \mathbb{Q}^+ = \mathbb{Q}^+$. </p> <p>But what if $H$ is finite; is $H$ perhaps a group?</p> <p>Thank you!</p>
[ { "answer_id": 2083230, "score": 5, "text": "<p>If $H$ is finite, then $H$ is a group.</p>\n\n<p>Note that if $h \\in H$, then $\\{h,h^2,h^3,\\ldots\\} \\subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \\in H$, so $H$ is closed under taking inverses. Since also $e \\in H$ and $H$ is closed under multiplication, $H$ is a group.</p>\n" }, { "answer_id": 2083293, "score": 1, "text": "<p>Here's a slight variation to your proof. Fix $h\\in H$. Consider the injection $f: hH\\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\\in H$ has a preimage under $f$, which means there exists a $h'\\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p>\n" }, { "answer_id": 2083230, "score": 5, "text": "<p>If $H$ is finite, then $H$ is a group.</p>\n\n<p>Note that if $h \\in H$, then $\\{h,h^2,h^3,\\ldots\\} \\subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \\in H$, so $H$ is closed under taking inverses. Since also $e \\in H$ and $H$ is closed under multiplication, $H$ is a group.</p>\n" }, { "answer_id": 2083293, "score": 1, "text": "<p>Here's a slight variation to your proof. Fix $h\\in H$. Consider the injection $f: hH\\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\\in H$ has a preimage under $f$, which means there exists a $h'\\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p>\n" }, { "answer_id": 2083230, "score": 5, "text": "<p>If $H$ is finite, then $H$ is a group.</p>\n\n<p>Note that if $h \\in H$, then $\\{h,h^2,h^3,\\ldots\\} \\subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \\in H$, so $H$ is closed under taking inverses. Since also $e \\in H$ and $H$ is closed under multiplication, $H$ is a group.</p>\n" }, { "answer_id": 2083293, "score": 1, "text": "<p>Here's a slight variation to your proof. Fix $h\\in H$. Consider the injection $f: hH\\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\\in H$ has a preimage under $f$, which means there exists a $h'\\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p>\n" }, { "answer_id": 2083230, "score": 5, "text": "<p>If $H$ is finite, then $H$ is a group.</p>\n\n<p>Note that if $h \\in H$, then $\\{h,h^2,h^3,\\ldots\\} \\subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \\in H$, so $H$ is closed under taking inverses. Since also $e \\in H$ and $H$ is closed under multiplication, $H$ is a group.</p>\n" }, { "answer_id": 2083293, "score": 1, "text": "<p>Here's a slight variation to your proof. Fix $h\\in H$. Consider the injection $f: hH\\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\\in H$ has a preimage under $f$, which means there exists a $h'\\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p>\n" } ]
Question: Finite subgroups and $H^2 = H$ Answer 1: <p>If $H$ is finite, then $H$ is a group.</p> <p>Note that if $h \in H$, then $\{h,h^2,h^3,\ldots\} \subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \in H$, so $H$ is closed under taking inverses. Since also $e \in H$ and $H$ is closed under multiplication, $H$ is a group.</p> Answer 2: <p>Here's a slight variation to your proof. Fix $h\in H$. Consider the injection $f: hH\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\in H$ has a preimage under $f$, which means there exists a $h'\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p> Answer 3: <p>If $H$ is finite, then $H$ is a group.</p> <p>Note that if $h \in H$, then $\{h,h^2,h^3,\ldots\} \subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \in H$, so $H$ is closed under taking inverses. Since also $e \in H$ and $H$ is closed under multiplication, $H$ is a group.</p> Answer 4: <p>Here's a slight variation to your proof. Fix $h\in H$. Consider the injection $f: hH\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\in H$ has a preimage under $f$, which means there exists a $h'\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p> Answer 5: <p>If $H$ is finite, then $H$ is a group.</p> <p>Note that if $h \in H$, then $\{h,h^2,h^3,\ldots\} \subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \in H$, so $H$ is closed under taking inverses. Since also $e \in H$ and $H$ is closed under multiplication, $H$ is a group.</p> Answer 6: <p>Here's a slight variation to your proof. Fix $h\in H$. Consider the injection $f: hH\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\in H$ has a preimage under $f$, which means there exists a $h'\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p> Answer 7: <p>If $H$ is finite, then $H$ is a group.</p> <p>Note that if $h \in H$, then $\{h,h^2,h^3,\ldots\} \subset H$, which implies that $h$ must have finite order (say $n$). In particular, $h^{-1} = h^{n-1} \in H$, so $H$ is closed under taking inverses. Since also $e \in H$ and $H$ is closed under multiplication, $H$ is a group.</p> Answer 8: <p>Here's a slight variation to your proof. Fix $h\in H$. Consider the injection $f: hH\to H$. Since both sets are finite and $|hH| = |H|$, $f$ is a bijection. Hence $e\in H$ has a preimage under $f$, which means there exists a $h'\in H$ such that $hh' = e$, so $H$ is closed under taking inverses.</p>
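As a computational companion to the row above, the following brute-force sketch checks the claim in one small concrete group: in Z/6Z (written additively), every nonempty subset H with H + H = H is indeed a subgroup. Nonemptiness is implicit in the thread, since both proofs fix some h in H; the choices of Python and of Z/6Z are mine, purely for illustration.

<pre><code>
from itertools import combinations

n = 6  # work in the cyclic group Z/6Z, written additively

def sum_set(H):
    """The set H + H of all pairwise sums, reduced mod n."""
    return {(a + b) % n for a in H for b in H}

def is_subgroup(H):
    """Contains the identity, closed under addition and negation."""
    return 0 in H and all((a + b) % n in H and (-a) % n in H for a in H for b in H)

for r in range(1, n + 1):
    for combo in combinations(range(n), r):
        H = set(combo)
        if sum_set(H) == H:
            assert is_subgroup(H)
            print(sorted(H), "satisfies H + H = H and is a subgroup")
</code></pre>

Running it prints exactly the four subgroups {0}, {0, 3}, {0, 2, 4} and Z/6Z itself, as the finite case of the question predicts.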
Convergence of $\sum_{n\geq 0}\frac{(n!)^d}{(d\cdot n)!}$
2,083,296
4
https://math.stackexchange.com/questions/2083296/convergence-of-sum-n-geq-0-fracndd-cdot-n
<p>In an exam I have been asked to discuss the convergence of a series depending on a parameter $d$. Here it is: $\sum_{n=0}^\infty \frac{(n!)^d}{(d\cdot n)!}$ </p> <p>The answer is that this series converges for $d \geq 2$. I totally understand that if $d \leq 1$ the series will not converge, but I am stuck when trying to apply d'Alembert's or Cauchy's rule. </p> <p>Can somebody give me a tip? </p>
[ { "answer_id": 2083337, "score": 2, "text": "<p>Let $f(n)$ be the general term of the series. Then, we have:</p>\n\n<p>$$\\frac{f(n+1)}{f(n)} = \\frac{(n+1)^d}{(dn+d)(\\cdots)(dn +1)}\\le \\left( \\frac{n+1}{dn + 1}\\right)^d \\to \\frac1{d^d}$$</p>\n\n<p>Thus:</p>\n\n<p>$$\\limsup \\frac{f(n+1)}{f(n)} \\le \\frac1{d^d} &lt; 1 \\text{ for $d\\ge 2$}$$</p>\n\n<p>So the series converges (using the ratio test).</p>\n" }, { "answer_id": 2083358, "score": 2, "text": "<p>Stirling approximation yields $\\displaystyle \\frac{(n!)^d}{(d\\cdot n)!}\\sim K\\frac{n^{\\frac{d-1}2}}{(d^d)^n}$ where $K=\\frac{(\\sqrt{2\\pi})^{d-1}}{\\sqrt d}$</p>\n\n<p>When $d&gt;1$, $d^d&gt;1$ and $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = O\\left( \\frac{1}{(d^d)^{n/2}}\\right)$ and the series converges.</p>\n\n<p>When $d=1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p>\n\n<p>When $d&lt;1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n}\\to \\infty$ and the series diverges trivially.</p>\n" }, { "answer_id": 2083337, "score": 2, "text": "<p>Let $f(n)$ be the general term of the series. Then, we have:</p>\n\n<p>$$\\frac{f(n+1)}{f(n)} = \\frac{(n+1)^d}{(dn+d)(\\cdots)(dn +1)}\\le \\left( \\frac{n+1}{dn + 1}\\right)^d \\to \\frac1{d^d}$$</p>\n\n<p>Thus:</p>\n\n<p>$$\\limsup \\frac{f(n+1)}{f(n)} \\le \\frac1{d^d} &lt; 1 \\text{ for $d\\ge 2$}$$</p>\n\n<p>So the series converges (using the ratio test).</p>\n" }, { "answer_id": 2083358, "score": 2, "text": "<p>Stirling approximation yields $\\displaystyle \\frac{(n!)^d}{(d\\cdot n)!}\\sim K\\frac{n^{\\frac{d-1}2}}{(d^d)^n}$ where $K=\\frac{(\\sqrt{2\\pi})^{d-1}}{\\sqrt d}$</p>\n\n<p>When $d&gt;1$, $d^d&gt;1$ and $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = O\\left( \\frac{1}{(d^d)^{n/2}}\\right)$ and the series converges.</p>\n\n<p>When $d=1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p>\n\n<p>When $d&lt;1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n}\\to \\infty$ and the series diverges trivially.</p>\n" }, { "answer_id": 2083337, "score": 2, "text": "<p>Let $f(n)$ be the general term of the series. Then, we have:</p>\n\n<p>$$\\frac{f(n+1)}{f(n)} = \\frac{(n+1)^d}{(dn+d)(\\cdots)(dn +1)}\\le \\left( \\frac{n+1}{dn + 1}\\right)^d \\to \\frac1{d^d}$$</p>\n\n<p>Thus:</p>\n\n<p>$$\\limsup \\frac{f(n+1)}{f(n)} \\le \\frac1{d^d} &lt; 1 \\text{ for $d\\ge 2$}$$</p>\n\n<p>So the series converges (using the ratio test).</p>\n" }, { "answer_id": 2083358, "score": 2, "text": "<p>Stirling approximation yields $\\displaystyle \\frac{(n!)^d}{(d\\cdot n)!}\\sim K\\frac{n^{\\frac{d-1}2}}{(d^d)^n}$ where $K=\\frac{(\\sqrt{2\\pi})^{d-1}}{\\sqrt d}$</p>\n\n<p>When $d&gt;1$, $d^d&gt;1$ and $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = O\\left( \\frac{1}{(d^d)^{n/2}}\\right)$ and the series converges.</p>\n\n<p>When $d=1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p>\n\n<p>When $d&lt;1$, $\\displaystyle \\frac{n^{\\frac{d-1}2}}{(d^d)^n}\\to \\infty$ and the series diverges trivially.</p>\n" } ]
Question: Convergence of $\sum_{n\geq 0}\frac{(n!)^d}{(d\cdot n)!}$ Answer 1: <p>Let $f(n)$ be the general term of the series. Then, we have:</p> <p>$$\frac{f(n+1)}{f(n)} = \frac{(n+1)^d}{(dn+d)(\cdots)(dn +1)}\le \left( \frac{n+1}{dn + 1}\right)^d \to \frac1{d^d}$$</p> <p>Thus:</p> <p>$$\limsup \frac{f(n+1)}{f(n)} \le \frac1{d^d} &lt; 1 \text{ for $d\ge 2$}$$</p> <p>So the series converges (using the ratio test).</p> Answer 2: <p>Stirling approximation yields $\displaystyle \frac{(n!)^d}{(d\cdot n)!}\sim K\frac{n^{\frac{d-1}2}}{(d^d)^n}$ where $K=\frac{(\sqrt{2\pi})^{d-1}}{\sqrt d}$</p> <p>When $d&gt;1$, $d^d&gt;1$ and $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = O\left( \frac{1}{(d^d)^{n/2}}\right)$ and the series converges.</p> <p>When $d=1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p> <p>When $d&lt;1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n}\to \infty$ and the series diverges trivially.</p> Answer 3: <p>Let $f(n)$ be the general term of the series. Then, we have:</p> <p>$$\frac{f(n+1)}{f(n)} = \frac{(n+1)^d}{(dn+d)(\cdots)(dn +1)}\le \left( \frac{n+1}{dn + 1}\right)^d \to \frac1{d^d}$$</p> <p>Thus:</p> <p>$$\limsup \frac{f(n+1)}{f(n)} \le \frac1{d^d} &lt; 1 \text{ for $d\ge 2$}$$</p> <p>So the series converges (using the ratio test).</p> Answer 4: <p>Stirling approximation yields $\displaystyle \frac{(n!)^d}{(d\cdot n)!}\sim K\frac{n^{\frac{d-1}2}}{(d^d)^n}$ where $K=\frac{(\sqrt{2\pi})^{d-1}}{\sqrt d}$</p> <p>When $d&gt;1$, $d^d&gt;1$ and $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = O\left( \frac{1}{(d^d)^{n/2}}\right)$ and the series converges.</p> <p>When $d=1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p> <p>When $d&lt;1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n}\to \infty$ and the series diverges trivially.</p> Answer 5: <p>Let $f(n)$ be the general term of the series. Then, we have:</p> <p>$$\frac{f(n+1)}{f(n)} = \frac{(n+1)^d}{(dn+d)(\cdots)(dn +1)}\le \left( \frac{n+1}{dn + 1}\right)^d \to \frac1{d^d}$$</p> <p>Thus:</p> <p>$$\limsup \frac{f(n+1)}{f(n)} \le \frac1{d^d} &lt; 1 \text{ for $d\ge 2$}$$</p> <p>So the series converges (using the ratio test).</p> Answer 6: <p>Stirling approximation yields $\displaystyle \frac{(n!)^d}{(d\cdot n)!}\sim K\frac{n^{\frac{d-1}2}}{(d^d)^n}$ where $K=\frac{(\sqrt{2\pi})^{d-1}}{\sqrt d}$</p> <p>When $d&gt;1$, $d^d&gt;1$ and $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = O\left( \frac{1}{(d^d)^{n/2}}\right)$ and the series converges.</p> <p>When $d=1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n} = 1$ and the series diverges.</p> <p>When $d&lt;1$, $\displaystyle \frac{n^{\frac{d-1}2}}{(d^d)^n}\to \infty$ and the series diverges trivially.</p>
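To see the ratio-test limit 1/d^d from the first answer numerically, here is a small exact-arithmetic check (a sketch only; Python's fractions module is an illustrative choice, nothing in the thread depends on it):

<pre><code>
from fractions import Fraction
from math import factorial

def term(n: int, d: int) -> Fraction:
    """General term (n!)^d / (d*n)! of the series, computed exactly."""
    return Fraction(factorial(n) ** d, factorial(d * n))

for d in (2, 3):
    partial = sum(term(n, d) for n in range(40))
    ratio = term(30, d) / term(29, d)  # should be close to 1/d^d
    print(f"d={d}: ratio ~ {float(ratio):.4f} (limit {1 / d**d:.4f}), "
          f"partial sum ~ {float(partial):.6f}")
</code></pre>

For d = 2 the ratio is already about 0.254 against the limit 0.25 and the partial sums settle quickly, consistent with convergence for d >= 2; for d = 1 every term equals 1, so the series trivially diverges.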
Is the limit of $f(x)-\ln(x)$ equal to the limit of $f(x+n)-\ln(x)$?
2,083,315
4
https://math.stackexchange.com/questions/2083315/is-the-limit-of-fx-lnx-equal-to-the-limit-of-fxn-lnx
<p>Given that $\displaystyle \lim_{x \to +\infty} (f(x) - \ln x) = 0$, can I say that $\displaystyle \lim_{x \to +\infty} (f(x+n)- \ln x) = 0$?</p>
[ { "answer_id": 2083320, "score": 4, "text": "<p>Yes, if $n$ is constant.</p>\n\n<p>\\begin{align}f(x+n)-\\ln {x}&amp;=f(x+n)\\pm\\ln{(x+n)}-\\ln x\\\\[0.2cm]&amp;=f(x+n)-\\ln {(x+n)}+(\\ln{(x+n)}-\\ln {(x)})\\end{align} So, this reduces to whether $\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=0$ or not. Due to continuity of $\\ln$ $$\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+n}{x}\\right)}=\\lim_{x\\to \\infty}\\ln {\\left(1+\\frac{n}{x}\\right)}=\\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\\lim_{x\\to\\infty}\\ln{(x+x)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+x}{x}\\right)}=\\ln 2$$ and the answer in this case is no.</p>\n" }, { "answer_id": 2083323, "score": 3, "text": "<p>Yes, the limit is 0.</p>\n\n<p>$$\\lim_{n \\to \\infty} (f(x+n)- \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (((f(x+n)- \\ln(x+n)) + (\\ln(x + n) - \\ln(n)))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln(x + n) - \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln \\frac{x + n}{n})$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln{1})$$</p>\n\n<p>$$= 0$$</p>\n" }, { "answer_id": 2083326, "score": 2, "text": "<p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\\over{x+n}}))$</p>\n\n<p>$=f(x+n)-ln(x+n)-ln(1-{n\\over{x+n}})$.</p>\n\n<p>$lim_{x\\rightarrow +\\infty}f(x)-ln(x)=0$ implies that $lim_{x\\rightarrow +\\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\\rightarrow +\\infty}ln(1-{n\\over{x+n}})=0$. So the answer is yes.</p>\n" }, { "answer_id": 2083320, "score": 4, "text": "<p>Yes, if $n$ is constant.</p>\n\n<p>\\begin{align}f(x+n)-\\ln {x}&amp;=f(x+n)\\pm\\ln{(x+n)}-\\ln x\\\\[0.2cm]&amp;=f(x+n)-\\ln {(x+n)}+(\\ln{(x+n)}-\\ln {(x)})\\end{align} So, this reduces to whether $\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=0$ or not. Due to continuity of $\\ln$ $$\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+n}{x}\\right)}=\\lim_{x\\to \\infty}\\ln {\\left(1+\\frac{n}{x}\\right)}=\\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\\lim_{x\\to\\infty}\\ln{(x+x)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+x}{x}\\right)}=\\ln 2$$ and the answer in this case is no.</p>\n" }, { "answer_id": 2083323, "score": 3, "text": "<p>Yes, the limit is 0.</p>\n\n<p>$$\\lim_{n \\to \\infty} (f(x+n)- \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (((f(x+n)- \\ln(x+n)) + (\\ln(x + n) - \\ln(n)))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln(x + n) - \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln \\frac{x + n}{n})$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln{1})$$</p>\n\n<p>$$= 0$$</p>\n" }, { "answer_id": 2083326, "score": 2, "text": "<p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\\over{x+n}}))$</p>\n\n<p>$=f(x+n)-ln(x+n)-ln(1-{n\\over{x+n}})$.</p>\n\n<p>$lim_{x\\rightarrow +\\infty}f(x)-ln(x)=0$ implies that $lim_{x\\rightarrow +\\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\\rightarrow +\\infty}ln(1-{n\\over{x+n}})=0$. So the answer is yes.</p>\n" }, { "answer_id": 2083320, "score": 4, "text": "<p>Yes, if $n$ is constant.</p>\n\n<p>\\begin{align}f(x+n)-\\ln {x}&amp;=f(x+n)\\pm\\ln{(x+n)}-\\ln x\\\\[0.2cm]&amp;=f(x+n)-\\ln {(x+n)}+(\\ln{(x+n)}-\\ln {(x)})\\end{align} So, this reduces to whether $\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=0$ or not. 
Due to continuity of $\\ln$ $$\\lim_{x\\to\\infty}\\ln{(x+n)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+n}{x}\\right)}=\\lim_{x\\to \\infty}\\ln {\\left(1+\\frac{n}{x}\\right)}=\\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\\lim_{x\\to\\infty}\\ln{(x+x)}-\\ln{(x)}=\\lim_{x\\to \\infty}\\ln {\\left(\\frac{x+x}{x}\\right)}=\\ln 2$$ and the answer in this case is no.</p>\n" }, { "answer_id": 2083323, "score": 3, "text": "<p>Yes, the limit is 0.</p>\n\n<p>$$\\lim_{n \\to \\infty} (f(x+n)- \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (((f(x+n)- \\ln(x+n)) + (\\ln(x + n) - \\ln(n)))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln(x + n) - \\ln(n))$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln \\frac{x + n}{n})$$</p>\n\n<p>$$= \\lim_{n \\to \\infty} (\\ln{1})$$</p>\n\n<p>$$= 0$$</p>\n" }, { "answer_id": 2083326, "score": 2, "text": "<p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\\over{x+n}}))$</p>\n\n<p>$=f(x+n)-ln(x+n)-ln(1-{n\\over{x+n}})$.</p>\n\n<p>$lim_{x\\rightarrow +\\infty}f(x)-ln(x)=0$ implies that $lim_{x\\rightarrow +\\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\\rightarrow +\\infty}ln(1-{n\\over{x+n}})=0$. So the answer is yes.</p>\n" } ]
Question: Is the limit of $f(x)-\ln(x)$ equal to the limit of $f(x+n)-\ln(x)$? Answer 1: <p>Yes, if $n$ is constant.</p> <p>\begin{align}f(x+n)-\ln {x}&amp;=f(x+n)\pm\ln{(x+n)}-\ln x\\[0.2cm]&amp;=f(x+n)-\ln {(x+n)}+(\ln{(x+n)}-\ln {(x)})\end{align} So, this reduces to whether $\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=0$ or not. Due to continuity of $\ln$ $$\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+n}{x}\right)}=\lim_{x\to \infty}\ln {\left(1+\frac{n}{x}\right)}=\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\lim_{x\to\infty}\ln{(x+x)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+x}{x}\right)}=\ln 2$$ and the answer in this case is no.</p> Answer 2: <p>Yes, the limit is 0.</p> <p>$$\lim_{n \to \infty} (f(x+n)- \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (((f(x+n)- \ln(x+n)) + (\ln(x + n) - \ln(n)))$$</p> <p>$$= \lim_{n \to \infty} (\ln(x + n) - \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (\ln \frac{x + n}{n})$$</p> <p>$$= \lim_{n \to \infty} (\ln{1})$$</p> <p>$$= 0$$</p> Answer 3: <p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\over{x+n}}))$</p> <p>$=f(x+n)-ln(x+n)-ln(1-{n\over{x+n}})$.</p> <p>$lim_{x\rightarrow +\infty}f(x)-ln(x)=0$ implies that $lim_{x\rightarrow +\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\rightarrow +\infty}ln(1-{n\over{x+n}})=0$. So the answer is yes.</p> Answer 4: <p>Yes, if $n$ is constant.</p> <p>\begin{align}f(x+n)-\ln {x}&amp;=f(x+n)\pm\ln{(x+n)}-\ln x\\[0.2cm]&amp;=f(x+n)-\ln {(x+n)}+(\ln{(x+n)}-\ln {(x)})\end{align} So, this reduces to whether $\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=0$ or not. Due to continuity of $\ln$ $$\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+n}{x}\right)}=\lim_{x\to \infty}\ln {\left(1+\frac{n}{x}\right)}=\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\lim_{x\to\infty}\ln{(x+x)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+x}{x}\right)}=\ln 2$$ and the answer in this case is no.</p> Answer 5: <p>Yes, the limit is 0.</p> <p>$$\lim_{n \to \infty} (f(x+n)- \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (((f(x+n)- \ln(x+n)) + (\ln(x + n) - \ln(n)))$$</p> <p>$$= \lim_{n \to \infty} (\ln(x + n) - \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (\ln \frac{x + n}{n})$$</p> <p>$$= \lim_{n \to \infty} (\ln{1})$$</p> <p>$$= 0$$</p> Answer 6: <p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\over{x+n}}))$</p> <p>$=f(x+n)-ln(x+n)-ln(1-{n\over{x+n}})$.</p> <p>$lim_{x\rightarrow +\infty}f(x)-ln(x)=0$ implies that $lim_{x\rightarrow +\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\rightarrow +\infty}ln(1-{n\over{x+n}})=0$. So the answer is yes.</p> Answer 7: <p>Yes, if $n$ is constant.</p> <p>\begin{align}f(x+n)-\ln {x}&amp;=f(x+n)\pm\ln{(x+n)}-\ln x\\[0.2cm]&amp;=f(x+n)-\ln {(x+n)}+(\ln{(x+n)}-\ln {(x)})\end{align} So, this reduces to whether $\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=0$ or not. 
Due to continuity of $\ln$ $$\lim_{x\to\infty}\ln{(x+n)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+n}{x}\right)}=\lim_{x\to \infty}\ln {\left(1+\frac{n}{x}\right)}=\ln {(1+0)}=0$$ If you mean $x=n$ and $n$ varies as $x$, in other words if $n$ is <strong>not</strong> a constant, then the last equation becomes $$\lim_{x\to\infty}\ln{(x+x)}-\ln{(x)}=\lim_{x\to \infty}\ln {\left(\frac{x+x}{x}\right)}=\ln 2$$ and the answer in this case is no.</p> Answer 8: <p>Yes, the limit is 0.</p> <p>$$\lim_{n \to \infty} (f(x+n)- \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (((f(x+n)- \ln(x+n)) + (\ln(x + n) - \ln(n)))$$</p> <p>$$= \lim_{n \to \infty} (\ln(x + n) - \ln(n))$$</p> <p>$$= \lim_{n \to \infty} (\ln \frac{x + n}{n})$$</p> <p>$$= \lim_{n \to \infty} (\ln{1})$$</p> <p>$$= 0$$</p> Answer 9: <p>$f(x+n)-ln(x)=f(x+n)-ln(x+n-n)=f(x+n)-ln((x+n)(1-{n\over{x+n}}))$</p> <p>$=f(x+n)-ln(x+n)-ln(1-{n\over{x+n}})$.</p> <p>$lim_{x\rightarrow +\infty}f(x)-ln(x)=0$ implies that $lim_{x\rightarrow +\infty}f(x+n)-ln(x+n)=0$. But $lim_{x\rightarrow +\infty}ln(1-{n\over{x+n}})=0$. So the answer is yes.</p>
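A quick numerical illustration of the two cases discussed above, using one concrete f with f(x) - ln(x) -> 0 (the choice f(x) = ln(x) + 1/x is mine, made up for the experiment): with a fixed shift n the difference still vanishes, while a shift growing like x yields ln 2 instead, matching the caveat about non-constant n.

<pre><code>
import math

def f(x: float) -> float:
    return math.log(x) + 1.0 / x  # satisfies f(x) - ln(x) -> 0

n = 5
for x in (1e2, 1e4, 1e6):
    fixed_shift = f(x + n) - math.log(x)    # tends to 0
    growing_shift = f(x + x) - math.log(x)  # tends to ln 2 ~ 0.693147
    print(f"x = {x:.0e}: fixed n: {fixed_shift:.6f}, n = x: {growing_shift:.6f}")
</code></pre>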
How to verbally state $f(y\mid x)\;?$
2,083,804
4
https://math.stackexchange.com/questions/2083804/how-to-verbally-state-fy-mid-x
<p>How do we verbally state: $\large f(y\mid x)\;?$</p> <p>I'm familiar with $f(x)$ as "$f$ of $x$" and $f(x,y)$ as "$f$ of $x$ and $y$" (or "$f$ of $x, y$"), but what does the vertical line mean, and how should this be stated verbally so that a screen reader would read it correctly?</p>
[ { "answer_id": 2083934, "score": 2, "text": "<p>F of y given x? This is what I heard in probability </p>\n" }, { "answer_id": 2083937, "score": 1, "text": "<p>We read $f(y \\ | \\ x)$ as \"f of y, given x.\" This is useful when we talk about conditional probability distributions.</p>\n" }, { "answer_id": 2083934, "score": 2, "text": "<p>F of y given x? This is what I heard in probability </p>\n" }, { "answer_id": 2083937, "score": 1, "text": "<p>We read $f(y \\ | \\ x)$ as \"f of y, given x.\" This is useful when we talk about conditional probability distributions.</p>\n" }, { "answer_id": 2083934, "score": 2, "text": "<p>F of y given x? This is what I heard in probability </p>\n" }, { "answer_id": 2083937, "score": 1, "text": "<p>We read $f(y \\ | \\ x)$ as \"f of y, given x.\" This is useful when we talk about conditional probability distributions.</p>\n" } ]
Question: How to verbally state $f(y\mid x)\;?$ Answer 1: <p>"F of y given x" is what I heard in probability.</p> Answer 2: <p>We read $f(y \ | \ x)$ as "f of y, given x." This is useful when we talk about conditional probability distributions.</p> Answer 3: <p>"F of y given x" is what I heard in probability.</p> Answer 4: <p>We read $f(y \ | \ x)$ as "f of y, given x." This is useful when we talk about conditional probability distributions.</p> Answer 5: <p>"F of y given x" is what I heard in probability.</p> Answer 6: <p>We read $f(y \ | \ x)$ as "f of y, given x." This is useful when we talk about conditional probability distributions.</p>
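Since the bar denotes conditioning, a tiny sketch may make the reading concrete: the quantity read aloud as "f of y given x" is f(x, y) / f(x). The joint table below is made-up illustration data, and Python is an arbitrary choice.

<pre><code>
# f(x, y): a hypothetical joint pmf on {0, 1} x {0, 1}
joint = {
    (0, 0): 0.1, (0, 1): 0.3,
    (1, 0): 0.2, (1, 1): 0.4,
}

def marginal_x(x):
    """f(x), obtained by summing the joint pmf over y."""
    return sum(p for (xx, _), p in joint.items() if xx == x)

def conditional(y, x):
    """f(y | x), read 'f of y given x'."""
    return joint[(x, y)] / marginal_x(x)

for x in (0, 1):
    for y in (0, 1):
        print(f"f({y} | {x}) = {conditional(y, x):.3f}")
</code></pre>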
Why can't I take the derivative of $x^x$ as $x(x^{x-1})$?
2,083,894
4
https://math.stackexchange.com/questions/2083894/why-cant-i-take-the-derivative-of-xx-as-xxx-1
<p>Why can't I take the derivative of $x^x$ as $x(x^{x-1})$?</p> <p>I don't understand why I have to convert it to $e^{x\ln(x)}$ first.</p>
[ { "answer_id": 4215893, "score": 5, "text": "<p>The chain rule can be applied directly, if we consider <span class=\"math-container\">$x^x=\\left.y^x\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}x}}\\right|_{y=x}$</span> and set <span class=\"math-container\">$\\frac{\\mathrm{d}y}{\\mathrm{d}x}=1$</span>.</p>\n<p>The part where we take the partial with respect to <span class=\"math-container\">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class=\"math-container\">$y$</span>).\n<span class=\"math-container\">$$\n\\begin{align}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}x^x\n&amp;=\\overbrace{\\left.\\frac{\\partial}{\\partial y}y^x\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}}^{\\text{used in question}}+\\overbrace{\\left.\\frac{\\partial}{\\partial x}y^x\\right|_{y=x}}^{\\substack{\\text{missing in}\\\\\\text{question}}}\\\\\n&amp;=\\left.xy^{x-1}\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}+\\left.\\log(y)\\,y^x\\vphantom{\\frac{\\partial}{\\partial x}}\\right|_{y=x}\\\\\n&amp;=x^x+\\log(x)\\,x^x\n\\end{align}\n$$</span></p>\n" }, { "answer_id": 2083910, "score": 3, "text": "<p>You want to differentiate $x\\mapsto x^x$ with respect to \"$x$\".</p>\n\n<p>You can not use the rule</p>\n\n<p>$$\\frac{\\mathrm d}{\\mathrm d x}(x^n)=nx^{n-1}$$</p>\n\n<p>because here, \"$n$\" is depending on $x$.</p>\n\n<p>What you are stated is correct, you have to use the identity </p>\n\n<p>$$x^x=e^{x\\ln(x)}$$</p>\n\n<p>and then use the chain rule.</p>\n" }, { "answer_id": 2083955, "score": 2, "text": "<p>Taking the basis as variable, the derivative is $x\\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\\cdot\\ln(x)$. Adding both gives the derivative of $x^x$, namely\n$$\\left(x^x\\right)'=x^x+x^x\\cdot\\ln(x).$$\n Astonishing at the first glance, isn't it?</p>\n" }, { "answer_id": 2083911, "score": 0, "text": "<p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p>\n\n<p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p>\n" }, { "answer_id": 2083912, "score": 0, "text": "<p><strong>just hint</strong></p>\n\n<p>It doesn't work cause the exponent is not constant.</p>\n\n<p>we cannot factor \n$x^x-x_0^{x_0}$</p>\n\n<p>it is not like \n$x^n-x_0^n$</p>\n" }, { "answer_id": 2083922, "score": 0, "text": "<p>Some people have given good hints, but I would like to elaborate on their responses.</p>\n\n<p>Students get confused with the power rule very often. I've seen these at all levels. The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p>\n\n<p>We know, and can show through induction (or even the binomial theorem; do it!) that</p>\n\n<p>$$ \\frac{d}{dx} [x^n] = nx^{n-1} $$</p>\n\n<p>where $n \\in \\mathbb{R}$. What you have is a function whose base and power both vary. To use a rule we know, we must do the typical $e$ trick: recall that</p>\n\n<p>$$ x^n = e^{n \\ln x} $$</p>\n\n<p>and so we just write $ x^x = e^{x \\ln x}$ and use our typical differentiation rules.</p>\n" }, { "answer_id": 2084055, "score": 0, "text": "<p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... 
etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. </p>\n" }, { "answer_id": 2084099, "score": 0, "text": "<p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p>\n\n<p>To differentiate $y = x^x$...</p>\n\n<p>$ln(y) = ln (x^x)$</p>\n\n<p>$ln(y) = x ln(x)$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + x(\\frac{1}{x}))dx$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + 1)dx$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)y$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)x^x$</p>\n\n<p>$\\frac{dy}{dx} = x^xln(x)+x^x$</p>\n" }, { "answer_id": 4215893, "score": 5, "text": "<p>The chain rule can be applied directly, if we consider <span class=\"math-container\">$x^x=\\left.y^x\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}x}}\\right|_{y=x}$</span> and set <span class=\"math-container\">$\\frac{\\mathrm{d}y}{\\mathrm{d}x}=1$</span>.</p>\n<p>The part where we take the partial with respect to <span class=\"math-container\">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class=\"math-container\">$y$</span>).\n<span class=\"math-container\">$$\n\\begin{align}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}x^x\n&amp;=\\overbrace{\\left.\\frac{\\partial}{\\partial y}y^x\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}}^{\\text{used in question}}+\\overbrace{\\left.\\frac{\\partial}{\\partial x}y^x\\right|_{y=x}}^{\\substack{\\text{missing in}\\\\\\text{question}}}\\\\\n&amp;=\\left.xy^{x-1}\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}+\\left.\\log(y)\\,y^x\\vphantom{\\frac{\\partial}{\\partial x}}\\right|_{y=x}\\\\\n&amp;=x^x+\\log(x)\\,x^x\n\\end{align}\n$$</span></p>\n" }, { "answer_id": 2083910, "score": 3, "text": "<p>You want to differentiate $x\\mapsto x^x$ with respect to \"$x$\".</p>\n\n<p>You can not use the rule</p>\n\n<p>$$\\frac{\\mathrm d}{\\mathrm d x}(x^n)=nx^{n-1}$$</p>\n\n<p>because here, \"$n$\" is depending on $x$.</p>\n\n<p>What you are stated is correct, you have to use the identity </p>\n\n<p>$$x^x=e^{x\\ln(x)}$$</p>\n\n<p>and then use the chain rule.</p>\n" }, { "answer_id": 2083955, "score": 2, "text": "<p>Taking the basis as variable, the derivative is $x\\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\\cdot\\ln(x)$. Adding both gives the derivative of $x^x$, namely\n$$\\left(x^x\\right)'=x^x+x^x\\cdot\\ln(x).$$\n Astonishing at the first glance, isn't it?</p>\n" }, { "answer_id": 2083911, "score": 0, "text": "<p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p>\n\n<p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p>\n" }, { "answer_id": 2083912, "score": 0, "text": "<p><strong>just hint</strong></p>\n\n<p>It doesn't work cause the exponent is not constant.</p>\n\n<p>we cannot factor \n$x^x-x_0^{x_0}$</p>\n\n<p>it is not like \n$x^n-x_0^n$</p>\n" }, { "answer_id": 2083922, "score": 0, "text": "<p>Some people have given good hints, but I would like to elaborate on their responses.</p>\n\n<p>Students get confused with the power rule very often. I've seen these at all levels. 
The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p>\n\n<p>We know, and can show through induction (or even the binomial theorem; do it!) that</p>\n\n<p>$$ \\frac{d}{dx} [x^n] = nx^{n-1} $$</p>\n\n<p>where $n \\in \\mathbb{R}$. What you have is a function whose base and power both vary. To use a rule we know, we must do the typical $e$ trick: recall that</p>\n\n<p>$$ x^n = e^{n \\ln x} $$</p>\n\n<p>and so we just write $ x^x = e^{x \\ln x}$ and use our typical differentiation rules.</p>\n" }, { "answer_id": 2084055, "score": 0, "text": "<p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. </p>\n" }, { "answer_id": 2084099, "score": 0, "text": "<p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p>\n\n<p>To differentiate $y = x^x$...</p>\n\n<p>$ln(y) = ln (x^x)$</p>\n\n<p>$ln(y) = x ln(x)$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + x(\\frac{1}{x}))dx$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + 1)dx$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)y$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)x^x$</p>\n\n<p>$\\frac{dy}{dx} = x^xln(x)+x^x$</p>\n" }, { "answer_id": 4215893, "score": 5, "text": "<p>The chain rule can be applied directly, if we consider <span class=\"math-container\">$x^x=\\left.y^x\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}x}}\\right|_{y=x}$</span> and set <span class=\"math-container\">$\\frac{\\mathrm{d}y}{\\mathrm{d}x}=1$</span>.</p>\n<p>The part where we take the partial with respect to <span class=\"math-container\">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class=\"math-container\">$y$</span>).\n<span class=\"math-container\">$$\n\\begin{align}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}x^x\n&amp;=\\overbrace{\\left.\\frac{\\partial}{\\partial y}y^x\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}}^{\\text{used in question}}+\\overbrace{\\left.\\frac{\\partial}{\\partial x}y^x\\right|_{y=x}}^{\\substack{\\text{missing in}\\\\\\text{question}}}\\\\\n&amp;=\\left.xy^{x-1}\\frac{\\mathrm{d}y}{\\mathrm{d}x}\\right|_{y=x}+\\left.\\log(y)\\,y^x\\vphantom{\\frac{\\partial}{\\partial x}}\\right|_{y=x}\\\\\n&amp;=x^x+\\log(x)\\,x^x\n\\end{align}\n$$</span></p>\n" }, { "answer_id": 2083910, "score": 3, "text": "<p>You want to differentiate $x\\mapsto x^x$ with respect to \"$x$\".</p>\n\n<p>You can not use the rule</p>\n\n<p>$$\\frac{\\mathrm d}{\\mathrm d x}(x^n)=nx^{n-1}$$</p>\n\n<p>because here, \"$n$\" is depending on $x$.</p>\n\n<p>What you are stated is correct, you have to use the identity </p>\n\n<p>$$x^x=e^{x\\ln(x)}$$</p>\n\n<p>and then use the chain rule.</p>\n" }, { "answer_id": 2083955, "score": 2, "text": "<p>Taking the basis as variable, the derivative is $x\\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\\cdot\\ln(x)$. 
Adding both gives the derivative of $x^x$, namely\n$$\\left(x^x\\right)'=x^x+x^x\\cdot\\ln(x).$$\n Astonishing at the first glance, isn't it?</p>\n" }, { "answer_id": 2083911, "score": 0, "text": "<p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p>\n\n<p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p>\n" }, { "answer_id": 2083912, "score": 0, "text": "<p><strong>just hint</strong></p>\n\n<p>It doesn't work cause the exponent is not constant.</p>\n\n<p>we cannot factor \n$x^x-x_0^{x_0}$</p>\n\n<p>it is not like \n$x^n-x_0^n$</p>\n" }, { "answer_id": 2083922, "score": 0, "text": "<p>Some people have given good hints, but I would like to elaborate on their responses.</p>\n\n<p>Students get confused with the power rule very often. I've seen these at all levels. The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p>\n\n<p>We know, and can show through induction (or even the binomial theorem; do it!) that</p>\n\n<p>$$ \\frac{d}{dx} [x^n] = nx^{n-1} $$</p>\n\n<p>where $n \\in \\mathbb{R}$. What you have is a function whose base and power both vary. To use a rule we know, we must do the typical $e$ trick: recall that</p>\n\n<p>$$ x^n = e^{n \\ln x} $$</p>\n\n<p>and so we just write $ x^x = e^{x \\ln x}$ and use our typical differentiation rules.</p>\n" }, { "answer_id": 2084055, "score": 0, "text": "<p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. </p>\n" }, { "answer_id": 2084099, "score": 0, "text": "<p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p>\n\n<p>To differentiate $y = x^x$...</p>\n\n<p>$ln(y) = ln (x^x)$</p>\n\n<p>$ln(y) = x ln(x)$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + x(\\frac{1}{x}))dx$</p>\n\n<p>$\\frac{1}{y} dy = (ln(x) + 1)dx$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)y$</p>\n\n<p>$\\frac{dy}{dx} = (ln(x)+1)x^x$</p>\n\n<p>$\\frac{dy}{dx} = x^xln(x)+x^x$</p>\n" } ]
Question: Why can&#39;t I take the derivative of $x^x$ as $x(x^{x-1})$? Answer 1: <p>The chain rule can be applied directly, if we consider <span class="math-container">$x^x=\left.y^x\vphantom{\frac{\mathrm{d}}{\mathrm{d}x}}\right|_{y=x}$</span> and set <span class="math-container">$\frac{\mathrm{d}y}{\mathrm{d}x}=1$</span>.</p> <p>The part where we take the partial with respect to <span class="math-container">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class="math-container">$y$</span>). <span class="math-container">$$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^x &amp;=\overbrace{\left.\frac{\partial}{\partial y}y^x\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}}^{\text{used in question}}+\overbrace{\left.\frac{\partial}{\partial x}y^x\right|_{y=x}}^{\substack{\text{missing in}\\\text{question}}}\\ &amp;=\left.xy^{x-1}\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}+\left.\log(y)\,y^x\vphantom{\frac{\partial}{\partial x}}\right|_{y=x}\\ &amp;=x^x+\log(x)\,x^x \end{align} $$</span></p> Answer 2: <p>You want to differentiate $x\mapsto x^x$ with respect to "$x$".</p> <p>You can not use the rule</p> <p>$$\frac{\mathrm d}{\mathrm d x}(x^n)=nx^{n-1}$$</p> <p>because here, "$n$" is depending on $x$.</p> <p>What you are stated is correct, you have to use the identity </p> <p>$$x^x=e^{x\ln(x)}$$</p> <p>and then use the chain rule.</p> Answer 3: <p>Taking the basis as variable, the derivative is $x\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\cdot\ln(x)$. Adding both gives the derivative of $x^x$, namely $$\left(x^x\right)'=x^x+x^x\cdot\ln(x).$$ Astonishing at the first glance, isn't it?</p> Answer 4: <p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p> <p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p> Answer 5: <p><strong>just hint</strong></p> <p>It doesn't work cause the exponent is not constant.</p> <p>we cannot factor $x^x-x_0^{x_0}$</p> <p>it is not like $x^n-x_0^n$</p> Answer 6: <p>Some people have given good hints, but I would like to elaborate on their responses.</p> <p>Students get confused with the power rule very often. I've seen these at all levels. The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p> <p>We know, and can show through induction (or even the binomial theorem; do it!) that</p> <p>$$ \frac{d}{dx} [x^n] = nx^{n-1} $$</p> <p>where $n \in \mathbb{R}$. What you have is a function whose base and power both vary. To use a rule we know, we must do the typical $e$ trick: recall that</p> <p>$$ x^n = e^{n \ln x} $$</p> <p>and so we just write $ x^x = e^{x \ln x}$ and use our typical differentiation rules.</p> Answer 7: <p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. 
</p> Answer 8: <p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p> <p>To differentiate $y = x^x$...</p> <p>$ln(y) = ln (x^x)$</p> <p>$ln(y) = x ln(x)$</p> <p>$\frac{1}{y} dy = (ln(x) + x(\frac{1}{x}))dx$</p> <p>$\frac{1}{y} dy = (ln(x) + 1)dx$</p> <p>$\frac{dy}{dx} = (ln(x)+1)y$</p> <p>$\frac{dy}{dx} = (ln(x)+1)x^x$</p> <p>$\frac{dy}{dx} = x^xln(x)+x^x$</p> Answer 9: <p>The chain rule can be applied directly, if we consider <span class="math-container">$x^x=\left.y^x\vphantom{\frac{\mathrm{d}}{\mathrm{d}x}}\right|_{y=x}$</span> and set <span class="math-container">$\frac{\mathrm{d}y}{\mathrm{d}x}=1$</span>.</p> <p>The part where we take the partial with respect to <span class="math-container">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class="math-container">$y$</span>). <span class="math-container">$$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^x &amp;=\overbrace{\left.\frac{\partial}{\partial y}y^x\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}}^{\text{used in question}}+\overbrace{\left.\frac{\partial}{\partial x}y^x\right|_{y=x}}^{\substack{\text{missing in}\\\text{question}}}\\ &amp;=\left.xy^{x-1}\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}+\left.\log(y)\,y^x\vphantom{\frac{\partial}{\partial x}}\right|_{y=x}\\ &amp;=x^x+\log(x)\,x^x \end{align} $$</span></p> Answer 10: <p>You want to differentiate $x\mapsto x^x$ with respect to "$x$".</p> <p>You can not use the rule</p> <p>$$\frac{\mathrm d}{\mathrm d x}(x^n)=nx^{n-1}$$</p> <p>because here, "$n$" is depending on $x$.</p> <p>What you are stated is correct, you have to use the identity </p> <p>$$x^x=e^{x\ln(x)}$$</p> <p>and then use the chain rule.</p> Answer 11: <p>Taking the basis as variable, the derivative is $x\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\cdot\ln(x)$. Adding both gives the derivative of $x^x$, namely $$\left(x^x\right)'=x^x+x^x\cdot\ln(x).$$ Astonishing at the first glance, isn't it?</p> Answer 12: <p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p> <p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p> Answer 13: <p><strong>just hint</strong></p> <p>It doesn't work cause the exponent is not constant.</p> <p>we cannot factor $x^x-x_0^{x_0}$</p> <p>it is not like $x^n-x_0^n$</p> Answer 14: <p>Some people have given good hints, but I would like to elaborate on their responses.</p> <p>Students get confused with the power rule very often. I've seen these at all levels. The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p> <p>We know, and can show through induction (or even the binomial theorem; do it!) that</p> <p>$$ \frac{d}{dx} [x^n] = nx^{n-1} $$</p> <p>where $n \in \mathbb{R}$. What you have is a function whose base and power both vary. To use a rule we know, we must do the typical $e$ trick: recall that</p> <p>$$ x^n = e^{n \ln x} $$</p> <p>and so we just write $ x^x = e^{x \ln x}$ and use our typical differentiation rules.</p> Answer 15: <p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. 
The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. </p> Answer 16: <p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p> <p>To differentiate $y = x^x$...</p> <p>$ln(y) = ln (x^x)$</p> <p>$ln(y) = x ln(x)$</p> <p>$\frac{1}{y} dy = (ln(x) + x(\frac{1}{x}))dx$</p> <p>$\frac{1}{y} dy = (ln(x) + 1)dx$</p> <p>$\frac{dy}{dx} = (ln(x)+1)y$</p> <p>$\frac{dy}{dx} = (ln(x)+1)x^x$</p> <p>$\frac{dy}{dx} = x^xln(x)+x^x$</p> Answer 17: <p>The chain rule can be applied directly, if we consider <span class="math-container">$x^x=\left.y^x\vphantom{\frac{\mathrm{d}}{\mathrm{d}x}}\right|_{y=x}$</span> and set <span class="math-container">$\frac{\mathrm{d}y}{\mathrm{d}x}=1$</span>.</p> <p>The part where we take the partial with respect to <span class="math-container">$x$</span> is missing if we just use the rule cited in the question (which is used when taking the partial with respect to <span class="math-container">$y$</span>). <span class="math-container">$$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^x &amp;=\overbrace{\left.\frac{\partial}{\partial y}y^x\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}}^{\text{used in question}}+\overbrace{\left.\frac{\partial}{\partial x}y^x\right|_{y=x}}^{\substack{\text{missing in}\\\text{question}}}\\ &amp;=\left.xy^{x-1}\frac{\mathrm{d}y}{\mathrm{d}x}\right|_{y=x}+\left.\log(y)\,y^x\vphantom{\frac{\partial}{\partial x}}\right|_{y=x}\\ &amp;=x^x+\log(x)\,x^x \end{align} $$</span></p> Answer 18: <p>You want to differentiate $x\mapsto x^x$ with respect to "$x$".</p> <p>You can not use the rule</p> <p>$$\frac{\mathrm d}{\mathrm d x}(x^n)=nx^{n-1}$$</p> <p>because here, "$n$" is depending on $x$.</p> <p>What you are stated is correct, you have to use the identity </p> <p>$$x^x=e^{x\ln(x)}$$</p> <p>and then use the chain rule.</p> Answer 19: <p>Taking the basis as variable, the derivative is $x\cdot x^{x-1}$, taking the exponent as such the derivative is $x^x\cdot\ln(x)$. Adding both gives the derivative of $x^x$, namely $$\left(x^x\right)'=x^x+x^x\cdot\ln(x).$$ Astonishing at the first glance, isn't it?</p> Answer 20: <p>I think you are asking why you can't differentiate $x^x$ in the same way you would differentiate $x^3$?</p> <p>The shortcut that you use when differentiating $x^r$ where $r$ is an real number, only hold when $r$ is a real number. In the case of $x^x$, the exponent is a variable so the shortcut doesn't apply.</p> Answer 21: <p><strong>just hint</strong></p> <p>It doesn't work cause the exponent is not constant.</p> <p>we cannot factor $x^x-x_0^{x_0}$</p> <p>it is not like $x^n-x_0^n$</p> Answer 22: <p>Some people have given good hints, but I would like to elaborate on their responses.</p> <p>Students get confused with the power rule very often. I've seen these at all levels. The key to remember is that the power rule allows us to look at derivatives of polynomials - specifically, those functions whose powers are <em>fixed</em> numbers.</p> <p>We know, and can show through induction (or even the binomial theorem; do it!) that</p> <p>$$ \frac{d}{dx} [x^n] = nx^{n-1} $$</p> <p>where $n \in \mathbb{R}$. What you have is a function whose base and power both vary. 
To use a rule we know, we must do the typical $e$ trick: recall that</p> <p>$$ x^n = e^{n \ln x} $$</p> <p>and so we just write $ x^x = e^{x \ln x}$ and use our typical differentiation rules.</p> Answer 23: <p>This is a valid assumption many students make, but the reason is because $x^x$ is not like the power rule we have seen. The power rule applies to functions like $5x^4, 3x^2, x^{3.2}... etc$. Notice the pattern? The $exponent$ is a $number$ that stays $constant$. However, something like $x^x$, is not. The $x$ in the exponent is always $changing$, that's what $x$ does. So you cannot. </p> Answer 24: <p>As people have said, you cannot use the power rule to differentiate non-constant powers; it only works for functions resembling $f(x)^c$ where $c$ is a constant.</p> <p>To differentiate $y = x^x$...</p> <p>$ln(y) = ln (x^x)$</p> <p>$ln(y) = x ln(x)$</p> <p>$\frac{1}{y} dy = (ln(x) + x(\frac{1}{x}))dx$</p> <p>$\frac{1}{y} dy = (ln(x) + 1)dx$</p> <p>$\frac{dy}{dx} = (ln(x)+1)y$</p> <p>$\frac{dy}{dx} = (ln(x)+1)x^x$</p> <p>$\frac{dy}{dx} = x^xln(x)+x^x$</p>
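The thread's point is easy to verify numerically: the full derivative x^x (ln x + 1) — exactly the sum of the base contribution x * x^(x-1) and the exponent contribution x^x * ln x described in the answers — matches a finite difference, while the bare power-rule guess does not (except at x = 1, where ln x = 0). A minimal sketch, assuming Python:

<pre><code>
import math

def g(x: float) -> float:
    return x ** x

def numeric_derivative(x: float, h: float = 1e-6) -> float:
    """Central finite difference, valid here for x > 0."""
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 3.0):
    naive = x * x ** (x - 1)             # treats the exponent as a constant
    full = x ** x * (math.log(x) + 1.0)  # = naive + x^x * ln(x)
    print(f"x = {x}: numeric {numeric_derivative(x):.5f}, "
          f"full {full:.5f}, naive {naive:.5f}")
</code></pre>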
the least odd natural number $k$ such that the equation $x^2\equiv k\pmod {2017}$ doesn't have an integer solution
2,084,053
4
https://math.stackexchange.com/questions/2084053/the-least-odd-natural-number-k-such-that-the-equation-x2-equiv-k-pmod-2017
<p>I'm looking for the least odd natural number $k$ such that the equation $x^2\equiv k\pmod {2017}$ doesn't have an integer solution.</p> <p>I'm using Eisenstein's lemma: we have $\big(\frac{a}{p}\big)=(-1)^{T(a,p)}$ when $a$ is an odd integer and $p$ is an odd prime with $p\nmid a$. Following my calculations we have:</p> <p>$T(3,2017)=336$ and $T(5,2017)=607$. Since $\big(\frac{3}{2017}\big)=(-1)^{336}=1$ and $\big(\frac{5}{2017}\big)=(-1)^{607}=-1$, we have $k=5$.</p> <p>Have I made any mistakes? </p>
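Before the recorded answers, a quick cross-check of the asker's computation via Euler's criterion rather than Eisenstein's lemma (a minimal sketch; both Python and the use of Euler's criterion are my choices, not part of the original post):

<pre><code>
# Euler's criterion: for an odd prime p and gcd(k, p) = 1,
# k is a quadratic residue mod p  iff  k^((p-1)/2) = 1 (mod p).

p = 2017

def is_quadratic_residue(k: int) -> bool:
    return pow(k, (p - 1) // 2, p) == 1

k = 1
while is_quadratic_residue(k):
    k += 2  # the question asks for odd k, so step through odd candidates
print("least odd non-residue mod", p, "is", k)  # prints 5, agreeing with k = 5
</code></pre>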
[ { "answer_id": 2084082, "score": 3, "text": "<p>The least quadratic non-residue $\\!\\!\\pmod{p}$ is always a prime. By exaustive search:\n$$\\left(\\frac{2}{2017}\\right)=1,\\quad \\left(\\frac{3}{2017}\\right)=1,\\quad \\left(\\frac{\\color{red}{5}}{2017}\\right)=-1$$\nhence $\\color{red}{5}$ is the answer. </p>\n\n<p>$\\pm 1,\\pm 2,\\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p>\n" }, { "answer_id": 2084073, "score": 1, "text": "<p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \\mathbb Z$ to\n$u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \\equiv 1 \\pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \\cdot 2^2 = 2017$</p>\n\n<p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \\equiv 2 \\pmod 4,$ so $2$ itself is out of the picture. </p>\n\n<pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42\n\n 0 form 1 43 -42\n\n\n 1 0\n 0 1\n\nTo Return \n 1 0\n 0 1\n\n0 form 1 43 -42 delta -1 ambiguous \n1 form -42 41 2 delta 21\n2 form 2 43 -21 delta -2\n3 form -21 41 4 delta 10\n4 form 4 39 -31 delta -1\n5 form -31 23 12 delta 2\n6 form 12 25 -29 delta -1\n7 form -29 33 8 delta 4\n8 form 8 31 -33 delta -1\n9 form -33 35 6 delta 6\n10 form 6 37 -27 delta -1\n11 form -27 17 16 delta 1\n12 form 16 15 -28 delta -1\n13 form -28 41 3 delta 14\n14 form 3 43 -14 delta -3\n15 form -14 41 6 delta 7\n16 form 6 43 -7 delta -6\n17 form -7 41 12 delta 3\n18 form 12 31 -22 delta -1\n19 form -22 13 21 delta 1\n20 form 21 29 -14 delta -2\n21 form -14 27 23 delta 1\n22 form 23 19 -18 delta -1\n23 form -18 17 24 delta 1\n24 form 24 31 -11 delta -3\n25 form -11 35 18 delta 2\n26 form 18 37 -9 delta -4\n27 form -9 35 22 delta 1\n28 form 22 9 -22 delta -1\n29 form -22 35 9 delta 4\n30 form 9 37 -18 delta -2\n31 form -18 35 11 delta 3\n32 form 11 31 -24 delta -1\n33 form -24 17 18 delta 1\n34 form 18 19 -23 delta -1\n35 form -23 27 14 delta 2\n36 form 14 29 -21 delta -1\n37 form -21 13 22 delta 1\n38 form 22 31 -12 delta -3\n39 form -12 41 7 delta 6\n40 form 7 43 -6 delta -7\n41 form -6 41 14 delta 3\n42 form 14 43 -3 delta -14\n43 form -3 41 28 delta 1\n44 form 28 15 -16 delta -1\n45 form -16 17 27 delta 1\n46 form 27 37 -6 delta -6\n47 form -6 35 33 delta 1\n48 form 33 31 -8 delta -4\n49 form -8 33 29 delta 1\n50 form 29 25 -12 delta -2\n51 form -12 23 31 delta 1\n52 form 31 39 -4 delta -10\n53 form -4 41 21 delta 2\n54 form 21 43 -2 delta -21\n55 form -2 41 42 delta 1\n56 form 42 43 -1 delta -43\n57 form -1 43 42 delta 1 ambiguous -1 composed with form zero \n58 form 42 41 -2 delta -21\n59 form -2 43 21 delta 2\n60 form 21 41 -4 delta -10\n61 form -4 39 31 delta 1\n62 form 31 23 -12 delta -2\n63 form -12 25 29 delta 1\n64 form 29 33 -8 delta -4\n65 form -8 31 33 delta 1\n66 form 33 35 -6 delta -6\n67 form -6 37 27 delta 1\n68 form 27 17 -16 delta -1\n69 form -16 15 28 delta 1\n70 form 28 41 -3 delta -14\n71 form -3 43 14 delta 3\n72 form 14 41 -6 delta -7\n73 form -6 43 
7 delta 6\n74 form 7 41 -12 delta -3\n75 form -12 31 22 delta 1\n76 form 22 13 -21 delta -1\n77 form -21 29 14 delta 2\n78 form 14 27 -23 delta -1\n79 form -23 19 18 delta 1\n80 form 18 17 -24 delta -1\n81 form -24 31 11 delta 3\n82 form 11 35 -18 delta -2\n83 form -18 37 9 delta 4\n84 form 9 35 -22 delta -1\n85 form -22 9 22 delta 1\n86 form 22 35 -9 delta -4\n87 form -9 37 18 delta 2\n88 form 18 35 -11 delta -3\n89 form -11 31 24 delta 1\n90 form 24 17 -18 delta -1\n91 form -18 19 23 delta 1\n92 form 23 27 -14 delta -2\n93 form -14 29 21 delta 1\n94 form 21 13 -22 delta -1\n95 form -22 31 12 delta 3\n96 form 12 41 -7 delta -6\n97 form -7 43 6 delta 7\n98 form 6 41 -14 delta -3\n99 form -14 43 3 delta 14\n100 form 3 41 -28 delta -1\n101 form -28 15 16 delta 1\n102 form 16 17 -27 delta -1\n103 form -27 37 6 delta 6\n104 form 6 35 -33 delta -1\n105 form -33 31 8 delta 4\n106 form 8 33 -29 delta -1\n107 form -29 25 12 delta 2\n108 form 12 23 -31 delta -1\n109 form -31 39 4 delta 10\n110 form 4 41 -21 delta -2\n111 form -21 43 2 delta 21\n112 form 2 41 -42 delta -1\n113 form -42 43 1 delta 43\n114 form 1 43 -42\n\n\n form 1 x^2 + 43 x y -42 y^2 \n\nminimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936\nAutomorph, written on right of Gram matrix: \n-965532677013781482875198088170183835928234466523769 -42440482758478505176147113435578355104819436064102752\n-1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777\n=========================================\n</code></pre>\n" }, { "answer_id": 2084082, "score": 3, "text": "<p>The least quadratic non-residue $\\!\\!\\pmod{p}$ is always a prime. By exaustive search:\n$$\\left(\\frac{2}{2017}\\right)=1,\\quad \\left(\\frac{3}{2017}\\right)=1,\\quad \\left(\\frac{\\color{red}{5}}{2017}\\right)=-1$$\nhence $\\color{red}{5}$ is the answer. </p>\n\n<p>$\\pm 1,\\pm 2,\\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p>\n" }, { "answer_id": 2084073, "score": 1, "text": "<p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \\mathbb Z$ to\n$u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \\equiv 1 \\pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \\cdot 2^2 = 2017$</p>\n\n<p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \\equiv 2 \\pmod 4,$ so $2$ itself is out of the picture. 
</p>\n\n<pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42\n\n 0 form 1 43 -42\n\n\n 1 0\n 0 1\n\nTo Return \n 1 0\n 0 1\n\n0 form 1 43 -42 delta -1 ambiguous \n1 form -42 41 2 delta 21\n2 form 2 43 -21 delta -2\n3 form -21 41 4 delta 10\n4 form 4 39 -31 delta -1\n5 form -31 23 12 delta 2\n6 form 12 25 -29 delta -1\n7 form -29 33 8 delta 4\n8 form 8 31 -33 delta -1\n9 form -33 35 6 delta 6\n10 form 6 37 -27 delta -1\n11 form -27 17 16 delta 1\n12 form 16 15 -28 delta -1\n13 form -28 41 3 delta 14\n14 form 3 43 -14 delta -3\n15 form -14 41 6 delta 7\n16 form 6 43 -7 delta -6\n17 form -7 41 12 delta 3\n18 form 12 31 -22 delta -1\n19 form -22 13 21 delta 1\n20 form 21 29 -14 delta -2\n21 form -14 27 23 delta 1\n22 form 23 19 -18 delta -1\n23 form -18 17 24 delta 1\n24 form 24 31 -11 delta -3\n25 form -11 35 18 delta 2\n26 form 18 37 -9 delta -4\n27 form -9 35 22 delta 1\n28 form 22 9 -22 delta -1\n29 form -22 35 9 delta 4\n30 form 9 37 -18 delta -2\n31 form -18 35 11 delta 3\n32 form 11 31 -24 delta -1\n33 form -24 17 18 delta 1\n34 form 18 19 -23 delta -1\n35 form -23 27 14 delta 2\n36 form 14 29 -21 delta -1\n37 form -21 13 22 delta 1\n38 form 22 31 -12 delta -3\n39 form -12 41 7 delta 6\n40 form 7 43 -6 delta -7\n41 form -6 41 14 delta 3\n42 form 14 43 -3 delta -14\n43 form -3 41 28 delta 1\n44 form 28 15 -16 delta -1\n45 form -16 17 27 delta 1\n46 form 27 37 -6 delta -6\n47 form -6 35 33 delta 1\n48 form 33 31 -8 delta -4\n49 form -8 33 29 delta 1\n50 form 29 25 -12 delta -2\n51 form -12 23 31 delta 1\n52 form 31 39 -4 delta -10\n53 form -4 41 21 delta 2\n54 form 21 43 -2 delta -21\n55 form -2 41 42 delta 1\n56 form 42 43 -1 delta -43\n57 form -1 43 42 delta 1 ambiguous -1 composed with form zero \n58 form 42 41 -2 delta -21\n59 form -2 43 21 delta 2\n60 form 21 41 -4 delta -10\n61 form -4 39 31 delta 1\n62 form 31 23 -12 delta -2\n63 form -12 25 29 delta 1\n64 form 29 33 -8 delta -4\n65 form -8 31 33 delta 1\n66 form 33 35 -6 delta -6\n67 form -6 37 27 delta 1\n68 form 27 17 -16 delta -1\n69 form -16 15 28 delta 1\n70 form 28 41 -3 delta -14\n71 form -3 43 14 delta 3\n72 form 14 41 -6 delta -7\n73 form -6 43 7 delta 6\n74 form 7 41 -12 delta -3\n75 form -12 31 22 delta 1\n76 form 22 13 -21 delta -1\n77 form -21 29 14 delta 2\n78 form 14 27 -23 delta -1\n79 form -23 19 18 delta 1\n80 form 18 17 -24 delta -1\n81 form -24 31 11 delta 3\n82 form 11 35 -18 delta -2\n83 form -18 37 9 delta 4\n84 form 9 35 -22 delta -1\n85 form -22 9 22 delta 1\n86 form 22 35 -9 delta -4\n87 form -9 37 18 delta 2\n88 form 18 35 -11 delta -3\n89 form -11 31 24 delta 1\n90 form 24 17 -18 delta -1\n91 form -18 19 23 delta 1\n92 form 23 27 -14 delta -2\n93 form -14 29 21 delta 1\n94 form 21 13 -22 delta -1\n95 form -22 31 12 delta 3\n96 form 12 41 -7 delta -6\n97 form -7 43 6 delta 7\n98 form 6 41 -14 delta -3\n99 form -14 43 3 delta 14\n100 form 3 41 -28 delta -1\n101 form -28 15 16 delta 1\n102 form 16 17 -27 delta -1\n103 form -27 37 6 delta 6\n104 form 6 35 -33 delta -1\n105 form -33 31 8 delta 4\n106 form 8 33 -29 delta -1\n107 form -29 25 12 delta 2\n108 form 12 23 -31 delta -1\n109 form -31 39 4 delta 10\n110 form 4 41 -21 delta -2\n111 form -21 43 2 delta 21\n112 form 2 41 -42 delta -1\n113 form -42 43 1 delta 43\n114 form 1 43 -42\n\n\n form 1 x^2 + 43 x y -42 y^2 \n\nminimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936\nAutomorph, written on right of Gram matrix: \n-965532677013781482875198088170183835928234466523769 
-42440482758478505176147113435578355104819436064102752\n-1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777\n=========================================\n</code></pre>\n" }, { "answer_id": 2084082, "score": 3, "text": "<p>The least quadratic non-residue $\\!\\!\\pmod{p}$ is always a prime. By exaustive search:\n$$\\left(\\frac{2}{2017}\\right)=1,\\quad \\left(\\frac{3}{2017}\\right)=1,\\quad \\left(\\frac{\\color{red}{5}}{2017}\\right)=-1$$\nhence $\\color{red}{5}$ is the answer. </p>\n\n<p>$\\pm 1,\\pm 2,\\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p>\n" }, { "answer_id": 2084073, "score": 1, "text": "<p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \\mathbb Z$ to\n$u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \\equiv 1 \\pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \\cdot 2^2 = 2017$</p>\n\n<p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \\equiv 2 \\pmod 4,$ so $2$ itself is out of the picture. </p>\n\n<pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42\n\n 0 form 1 43 -42\n\n\n 1 0\n 0 1\n\nTo Return \n 1 0\n 0 1\n\n0 form 1 43 -42 delta -1 ambiguous \n1 form -42 41 2 delta 21\n2 form 2 43 -21 delta -2\n3 form -21 41 4 delta 10\n4 form 4 39 -31 delta -1\n5 form -31 23 12 delta 2\n6 form 12 25 -29 delta -1\n7 form -29 33 8 delta 4\n8 form 8 31 -33 delta -1\n9 form -33 35 6 delta 6\n10 form 6 37 -27 delta -1\n11 form -27 17 16 delta 1\n12 form 16 15 -28 delta -1\n13 form -28 41 3 delta 14\n14 form 3 43 -14 delta -3\n15 form -14 41 6 delta 7\n16 form 6 43 -7 delta -6\n17 form -7 41 12 delta 3\n18 form 12 31 -22 delta -1\n19 form -22 13 21 delta 1\n20 form 21 29 -14 delta -2\n21 form -14 27 23 delta 1\n22 form 23 19 -18 delta -1\n23 form -18 17 24 delta 1\n24 form 24 31 -11 delta -3\n25 form -11 35 18 delta 2\n26 form 18 37 -9 delta -4\n27 form -9 35 22 delta 1\n28 form 22 9 -22 delta -1\n29 form -22 35 9 delta 4\n30 form 9 37 -18 delta -2\n31 form -18 35 11 delta 3\n32 form 11 31 -24 delta -1\n33 form -24 17 18 delta 1\n34 form 18 19 -23 delta -1\n35 form -23 27 14 delta 2\n36 form 14 29 -21 delta -1\n37 form -21 13 22 delta 1\n38 form 22 31 -12 delta -3\n39 form -12 41 7 delta 6\n40 form 7 43 -6 delta -7\n41 form -6 41 14 delta 3\n42 form 14 43 -3 delta -14\n43 form -3 41 28 delta 1\n44 form 28 15 -16 delta -1\n45 form -16 17 27 delta 1\n46 form 27 37 -6 delta -6\n47 form -6 35 33 delta 1\n48 form 33 31 -8 delta -4\n49 form -8 33 29 delta 1\n50 form 29 25 -12 delta -2\n51 form -12 23 31 delta 1\n52 form 31 39 -4 delta -10\n53 form -4 41 21 delta 2\n54 form 21 43 -2 delta -21\n55 form -2 41 42 delta 1\n56 form 42 43 -1 delta -43\n57 form -1 43 42 delta 1 ambiguous -1 composed with form zero \n58 form 42 41 -2 delta -21\n59 form -2 43 21 delta 2\n60 form 21 41 -4 delta -10\n61 form -4 39 31 delta 1\n62 form 31 23 -12 delta -2\n63 form -12 25 29 delta 1\n64 form 29 33 -8 delta -4\n65 
form -8 31 33 delta 1\n66 form 33 35 -6 delta -6\n67 form -6 37 27 delta 1\n68 form 27 17 -16 delta -1\n69 form -16 15 28 delta 1\n70 form 28 41 -3 delta -14\n71 form -3 43 14 delta 3\n72 form 14 41 -6 delta -7\n73 form -6 43 7 delta 6\n74 form 7 41 -12 delta -3\n75 form -12 31 22 delta 1\n76 form 22 13 -21 delta -1\n77 form -21 29 14 delta 2\n78 form 14 27 -23 delta -1\n79 form -23 19 18 delta 1\n80 form 18 17 -24 delta -1\n81 form -24 31 11 delta 3\n82 form 11 35 -18 delta -2\n83 form -18 37 9 delta 4\n84 form 9 35 -22 delta -1\n85 form -22 9 22 delta 1\n86 form 22 35 -9 delta -4\n87 form -9 37 18 delta 2\n88 form 18 35 -11 delta -3\n89 form -11 31 24 delta 1\n90 form 24 17 -18 delta -1\n91 form -18 19 23 delta 1\n92 form 23 27 -14 delta -2\n93 form -14 29 21 delta 1\n94 form 21 13 -22 delta -1\n95 form -22 31 12 delta 3\n96 form 12 41 -7 delta -6\n97 form -7 43 6 delta 7\n98 form 6 41 -14 delta -3\n99 form -14 43 3 delta 14\n100 form 3 41 -28 delta -1\n101 form -28 15 16 delta 1\n102 form 16 17 -27 delta -1\n103 form -27 37 6 delta 6\n104 form 6 35 -33 delta -1\n105 form -33 31 8 delta 4\n106 form 8 33 -29 delta -1\n107 form -29 25 12 delta 2\n108 form 12 23 -31 delta -1\n109 form -31 39 4 delta 10\n110 form 4 41 -21 delta -2\n111 form -21 43 2 delta 21\n112 form 2 41 -42 delta -1\n113 form -42 43 1 delta 43\n114 form 1 43 -42\n\n\n form 1 x^2 + 43 x y -42 y^2 \n\nminimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936\nAutomorph, written on right of Gram matrix: \n-965532677013781482875198088170183835928234466523769 -42440482758478505176147113435578355104819436064102752\n-1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777\n=========================================\n</code></pre>\n" } ]
Question: the least odd natural number $k$ such that the equation $x^2\equiv k\pmod {2017}$ doesn&#39;t have integer solution Answer 1: <p>The least quadratic non-residue $\!\!\pmod{p}$ is always a prime. By exaustive search: $$\left(\frac{2}{2017}\right)=1,\quad \left(\frac{3}{2017}\right)=1,\quad \left(\frac{\color{red}{5}}{2017}\right)=-1$$ hence $\color{red}{5}$ is the answer. </p> <p>$\pm 1,\pm 2,\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p> Answer 2: <p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \mathbb Z$ to $u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \equiv 1 \pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \cdot 2^2 = 2017$</p> <p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \equiv 2 \pmod 4,$ so $2$ itself is out of the picture. </p> <pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42 0 form 1 43 -42 1 0 0 1 To Return 1 0 0 1 0 form 1 43 -42 delta -1 ambiguous 1 form -42 41 2 delta 21 2 form 2 43 -21 delta -2 3 form -21 41 4 delta 10 4 form 4 39 -31 delta -1 5 form -31 23 12 delta 2 6 form 12 25 -29 delta -1 7 form -29 33 8 delta 4 8 form 8 31 -33 delta -1 9 form -33 35 6 delta 6 10 form 6 37 -27 delta -1 11 form -27 17 16 delta 1 12 form 16 15 -28 delta -1 13 form -28 41 3 delta 14 14 form 3 43 -14 delta -3 15 form -14 41 6 delta 7 16 form 6 43 -7 delta -6 17 form -7 41 12 delta 3 18 form 12 31 -22 delta -1 19 form -22 13 21 delta 1 20 form 21 29 -14 delta -2 21 form -14 27 23 delta 1 22 form 23 19 -18 delta -1 23 form -18 17 24 delta 1 24 form 24 31 -11 delta -3 25 form -11 35 18 delta 2 26 form 18 37 -9 delta -4 27 form -9 35 22 delta 1 28 form 22 9 -22 delta -1 29 form -22 35 9 delta 4 30 form 9 37 -18 delta -2 31 form -18 35 11 delta 3 32 form 11 31 -24 delta -1 33 form -24 17 18 delta 1 34 form 18 19 -23 delta -1 35 form -23 27 14 delta 2 36 form 14 29 -21 delta -1 37 form -21 13 22 delta 1 38 form 22 31 -12 delta -3 39 form -12 41 7 delta 6 40 form 7 43 -6 delta -7 41 form -6 41 14 delta 3 42 form 14 43 -3 delta -14 43 form -3 41 28 delta 1 44 form 28 15 -16 delta -1 45 form -16 17 27 delta 1 46 form 27 37 -6 delta -6 47 form -6 35 33 delta 1 48 form 33 31 -8 delta -4 49 form -8 33 29 delta 1 50 form 29 25 -12 delta -2 51 form -12 23 31 delta 1 52 form 31 39 -4 delta -10 53 form -4 41 21 delta 2 54 form 21 43 -2 delta -21 55 form -2 41 42 delta 1 56 form 42 43 -1 delta -43 57 form -1 43 42 delta 1 ambiguous -1 composed with form zero 58 form 42 41 -2 delta -21 59 form -2 43 21 delta 2 60 form 21 41 -4 delta -10 61 form -4 39 31 delta 1 62 form 31 23 -12 delta -2 63 form -12 25 29 delta 1 64 form 29 33 -8 delta -4 65 form -8 31 33 delta 1 66 form 33 35 -6 delta -6 67 form -6 37 27 delta 1 68 form 27 17 -16 delta -1 69 form -16 15 28 delta 1 70 form 28 41 -3 delta -14 71 form -3 43 14 delta 3 72 form 14 41 -6 delta -7 73 form -6 43 7 delta 6 74 form 7 41 -12 delta -3 75 form -12 31 22 delta 1 76 form 22 13 -21 delta 
-1 77 form -21 29 14 delta 2 78 form 14 27 -23 delta -1 79 form -23 19 18 delta 1 80 form 18 17 -24 delta -1 81 form -24 31 11 delta 3 82 form 11 35 -18 delta -2 83 form -18 37 9 delta 4 84 form 9 35 -22 delta -1 85 form -22 9 22 delta 1 86 form 22 35 -9 delta -4 87 form -9 37 18 delta 2 88 form 18 35 -11 delta -3 89 form -11 31 24 delta 1 90 form 24 17 -18 delta -1 91 form -18 19 23 delta 1 92 form 23 27 -14 delta -2 93 form -14 29 21 delta 1 94 form 21 13 -22 delta -1 95 form -22 31 12 delta 3 96 form 12 41 -7 delta -6 97 form -7 43 6 delta 7 98 form 6 41 -14 delta -3 99 form -14 43 3 delta 14 100 form 3 41 -28 delta -1 101 form -28 15 16 delta 1 102 form 16 17 -27 delta -1 103 form -27 37 6 delta 6 104 form 6 35 -33 delta -1 105 form -33 31 8 delta 4 106 form 8 33 -29 delta -1 107 form -29 25 12 delta 2 108 form 12 23 -31 delta -1 109 form -31 39 4 delta 10 110 form 4 41 -21 delta -2 111 form -21 43 2 delta 21 112 form 2 41 -42 delta -1 113 form -42 43 1 delta 43 114 form 1 43 -42 form 1 x^2 + 43 x y -42 y^2 minimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936 Automorph, written on right of Gram matrix: -965532677013781482875198088170183835928234466523769 -42440482758478505176147113435578355104819436064102752 -1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777 ========================================= </code></pre> Answer 3: <p>The least quadratic non-residue $\!\!\pmod{p}$ is always a prime. By exaustive search: $$\left(\frac{2}{2017}\right)=1,\quad \left(\frac{3}{2017}\right)=1,\quad \left(\frac{\color{red}{5}}{2017}\right)=-1$$ hence $\color{red}{5}$ is the answer. </p> <p>$\pm 1,\pm 2,\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p> Answer 4: <p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \mathbb Z$ to $u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \equiv 1 \pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \cdot 2^2 = 2017$</p> <p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \equiv 2 \pmod 4,$ so $2$ itself is out of the picture. 
</p> <pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42 0 form 1 43 -42 1 0 0 1 To Return 1 0 0 1 0 form 1 43 -42 delta -1 ambiguous 1 form -42 41 2 delta 21 2 form 2 43 -21 delta -2 3 form -21 41 4 delta 10 4 form 4 39 -31 delta -1 5 form -31 23 12 delta 2 6 form 12 25 -29 delta -1 7 form -29 33 8 delta 4 8 form 8 31 -33 delta -1 9 form -33 35 6 delta 6 10 form 6 37 -27 delta -1 11 form -27 17 16 delta 1 12 form 16 15 -28 delta -1 13 form -28 41 3 delta 14 14 form 3 43 -14 delta -3 15 form -14 41 6 delta 7 16 form 6 43 -7 delta -6 17 form -7 41 12 delta 3 18 form 12 31 -22 delta -1 19 form -22 13 21 delta 1 20 form 21 29 -14 delta -2 21 form -14 27 23 delta 1 22 form 23 19 -18 delta -1 23 form -18 17 24 delta 1 24 form 24 31 -11 delta -3 25 form -11 35 18 delta 2 26 form 18 37 -9 delta -4 27 form -9 35 22 delta 1 28 form 22 9 -22 delta -1 29 form -22 35 9 delta 4 30 form 9 37 -18 delta -2 31 form -18 35 11 delta 3 32 form 11 31 -24 delta -1 33 form -24 17 18 delta 1 34 form 18 19 -23 delta -1 35 form -23 27 14 delta 2 36 form 14 29 -21 delta -1 37 form -21 13 22 delta 1 38 form 22 31 -12 delta -3 39 form -12 41 7 delta 6 40 form 7 43 -6 delta -7 41 form -6 41 14 delta 3 42 form 14 43 -3 delta -14 43 form -3 41 28 delta 1 44 form 28 15 -16 delta -1 45 form -16 17 27 delta 1 46 form 27 37 -6 delta -6 47 form -6 35 33 delta 1 48 form 33 31 -8 delta -4 49 form -8 33 29 delta 1 50 form 29 25 -12 delta -2 51 form -12 23 31 delta 1 52 form 31 39 -4 delta -10 53 form -4 41 21 delta 2 54 form 21 43 -2 delta -21 55 form -2 41 42 delta 1 56 form 42 43 -1 delta -43 57 form -1 43 42 delta 1 ambiguous -1 composed with form zero 58 form 42 41 -2 delta -21 59 form -2 43 21 delta 2 60 form 21 41 -4 delta -10 61 form -4 39 31 delta 1 62 form 31 23 -12 delta -2 63 form -12 25 29 delta 1 64 form 29 33 -8 delta -4 65 form -8 31 33 delta 1 66 form 33 35 -6 delta -6 67 form -6 37 27 delta 1 68 form 27 17 -16 delta -1 69 form -16 15 28 delta 1 70 form 28 41 -3 delta -14 71 form -3 43 14 delta 3 72 form 14 41 -6 delta -7 73 form -6 43 7 delta 6 74 form 7 41 -12 delta -3 75 form -12 31 22 delta 1 76 form 22 13 -21 delta -1 77 form -21 29 14 delta 2 78 form 14 27 -23 delta -1 79 form -23 19 18 delta 1 80 form 18 17 -24 delta -1 81 form -24 31 11 delta 3 82 form 11 35 -18 delta -2 83 form -18 37 9 delta 4 84 form 9 35 -22 delta -1 85 form -22 9 22 delta 1 86 form 22 35 -9 delta -4 87 form -9 37 18 delta 2 88 form 18 35 -11 delta -3 89 form -11 31 24 delta 1 90 form 24 17 -18 delta -1 91 form -18 19 23 delta 1 92 form 23 27 -14 delta -2 93 form -14 29 21 delta 1 94 form 21 13 -22 delta -1 95 form -22 31 12 delta 3 96 form 12 41 -7 delta -6 97 form -7 43 6 delta 7 98 form 6 41 -14 delta -3 99 form -14 43 3 delta 14 100 form 3 41 -28 delta -1 101 form -28 15 16 delta 1 102 form 16 17 -27 delta -1 103 form -27 37 6 delta 6 104 form 6 35 -33 delta -1 105 form -33 31 8 delta 4 106 form 8 33 -29 delta -1 107 form -29 25 12 delta 2 108 form 12 23 -31 delta -1 109 form -31 39 4 delta 10 110 form 4 41 -21 delta -2 111 form -21 43 2 delta 21 112 form 2 41 -42 delta -1 113 form -42 43 1 delta 43 114 form 1 43 -42 form 1 x^2 + 43 x y -42 y^2 minimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936 Automorph, written on right of Gram matrix: -965532677013781482875198088170183835928234466523769 -42440482758478505176147113435578355104819436064102752 -1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777 
========================================= </code></pre> Answer 5: <p>The least quadratic non-residue $\!\!\pmod{p}$ is always a prime. By exaustive search: $$\left(\frac{2}{2017}\right)=1,\quad \left(\frac{3}{2017}\right)=1,\quad \left(\frac{\color{red}{5}}{2017}\right)=-1$$ hence $\color{red}{5}$ is the answer. </p> <p>$\pm 1,\pm 2,\pm 3$ are surely quadratic residues since $2017$ is a number of the form $24k+1$.</p> Answer 6: <p>For amusement value: the discriminant $2017$ has class number one. The principal form is $x^2 + 43 xy - 42 y^2,$ equivalent over $SL_2 \mathbb Z$ to $u^2 + uv - 504 v^2.$ Meanwhile, the form does represent $-1$ integrally, which is automatic as $2017 \equiv 1 \pmod 4$ is prime. The Lagrange cycle for the principal form therefore shows all the small primes represented, which are the same as those which are quadratic residues. The line labelled 2 shows that $2$ is represented, line 14 shows that $3$ is represented. Line 40 shows $7.$ Line 32 shows $11.$ The bit about $2$ means that there is an integer solution to $s^2 - 2 t^2 = 2017.$ Look at that, $45^2 - 2 \cdot 2^2 = 2017$</p> <p>Discriminant $8068$ also has class number one, so $x^2 - 2017 y^2$ does very well. However, it is not possible to have $x^2 - 2017 y^2 \equiv 2 \pmod 4,$ so $2$ itself is out of the picture. </p> <pre><code>jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle 1 43 -42 0 form 1 43 -42 1 0 0 1 To Return 1 0 0 1 0 form 1 43 -42 delta -1 ambiguous 1 form -42 41 2 delta 21 2 form 2 43 -21 delta -2 3 form -21 41 4 delta 10 4 form 4 39 -31 delta -1 5 form -31 23 12 delta 2 6 form 12 25 -29 delta -1 7 form -29 33 8 delta 4 8 form 8 31 -33 delta -1 9 form -33 35 6 delta 6 10 form 6 37 -27 delta -1 11 form -27 17 16 delta 1 12 form 16 15 -28 delta -1 13 form -28 41 3 delta 14 14 form 3 43 -14 delta -3 15 form -14 41 6 delta 7 16 form 6 43 -7 delta -6 17 form -7 41 12 delta 3 18 form 12 31 -22 delta -1 19 form -22 13 21 delta 1 20 form 21 29 -14 delta -2 21 form -14 27 23 delta 1 22 form 23 19 -18 delta -1 23 form -18 17 24 delta 1 24 form 24 31 -11 delta -3 25 form -11 35 18 delta 2 26 form 18 37 -9 delta -4 27 form -9 35 22 delta 1 28 form 22 9 -22 delta -1 29 form -22 35 9 delta 4 30 form 9 37 -18 delta -2 31 form -18 35 11 delta 3 32 form 11 31 -24 delta -1 33 form -24 17 18 delta 1 34 form 18 19 -23 delta -1 35 form -23 27 14 delta 2 36 form 14 29 -21 delta -1 37 form -21 13 22 delta 1 38 form 22 31 -12 delta -3 39 form -12 41 7 delta 6 40 form 7 43 -6 delta -7 41 form -6 41 14 delta 3 42 form 14 43 -3 delta -14 43 form -3 41 28 delta 1 44 form 28 15 -16 delta -1 45 form -16 17 27 delta 1 46 form 27 37 -6 delta -6 47 form -6 35 33 delta 1 48 form 33 31 -8 delta -4 49 form -8 33 29 delta 1 50 form 29 25 -12 delta -2 51 form -12 23 31 delta 1 52 form 31 39 -4 delta -10 53 form -4 41 21 delta 2 54 form 21 43 -2 delta -21 55 form -2 41 42 delta 1 56 form 42 43 -1 delta -43 57 form -1 43 42 delta 1 ambiguous -1 composed with form zero 58 form 42 41 -2 delta -21 59 form -2 43 21 delta 2 60 form 21 41 -4 delta -10 61 form -4 39 31 delta 1 62 form 31 23 -12 delta -2 63 form -12 25 29 delta 1 64 form 29 33 -8 delta -4 65 form -8 31 33 delta 1 66 form 33 35 -6 delta -6 67 form -6 37 27 delta 1 68 form 27 17 -16 delta -1 69 form -16 15 28 delta 1 70 form 28 41 -3 delta -14 71 form -3 43 14 delta 3 72 form 14 41 -6 delta -7 73 form -6 43 7 delta 6 74 form 7 41 -12 delta -3 75 form -12 31 22 delta 1 76 form 22 13 -21 delta -1 77 form -21 29 14 delta 2 78 form 14 27 -23 delta -1 79 form -23 19 
18 delta 1 80 form 18 17 -24 delta -1 81 form -24 31 11 delta 3 82 form 11 35 -18 delta -2 83 form -18 37 9 delta 4 84 form 9 35 -22 delta -1 85 form -22 9 22 delta 1 86 form 22 35 -9 delta -4 87 form -9 37 18 delta 2 88 form 18 35 -11 delta -3 89 form -11 31 24 delta 1 90 form 24 17 -18 delta -1 91 form -18 19 23 delta 1 92 form 23 27 -14 delta -2 93 form -14 29 21 delta 1 94 form 21 13 -22 delta -1 95 form -22 31 12 delta 3 96 form 12 41 -7 delta -6 97 form -7 43 6 delta 7 98 form 6 41 -14 delta -3 99 form -14 43 3 delta 14 100 form 3 41 -28 delta -1 101 form -28 15 16 delta 1 102 form 16 17 -27 delta -1 103 form -27 37 6 delta 6 104 form 6 35 -33 delta -1 105 form -33 31 8 delta 4 106 form 8 33 -29 delta -1 107 form -29 25 12 delta 2 108 form 12 23 -31 delta -1 109 form -31 39 4 delta 10 110 form 4 41 -21 delta -2 111 form -21 43 2 delta 21 112 form 2 41 -42 delta -1 113 form -42 43 1 delta 43 114 form 1 43 -42 form 1 x^2 + 43 x y -42 y^2 minimum was 1rep x = 1 y = 0 disc 2017 dSqrt 44 M_Ratio 1936 Automorph, written on right of Gram matrix: -965532677013781482875198088170183835928234466523769 -42440482758478505176147113435578355104819436064102752 -1010487684725678694670169367513770359638558001526256 -44416503120217965353692480891262309300386228532152777 ========================================= </code></pre>
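A minimal Python sketch (not part of the original posts) that brute-forces the claim in Answer 1: the least odd $k$ with no integer solution to $x^2\equiv k\pmod{2017}$ is $5$.

<pre><code># Brute force: collect all nonzero quadratic residues mod p, then scan odd k.
p = 2017
residues = {pow(x, 2, p) for x in range(1, p)}
print(next(k for k in range(1, p, 2) if k not in residues))  # prints 5

# Cross-check with Euler's criterion: k is a residue iff k^((p-1)/2) == 1 (mod p).
assert all(pow(k, (p - 1) // 2, p) == 1 for k in (1, 2, 3))  # 1, 2, 3 are residues
assert pow(5, (p - 1) // 2, p) == p - 1                      # 5 is a non-residue
</code></pre>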
If $(1+ \cos A) (1+ \cos B) (1+\cos C)= y = (1- \cos A) (1-\cos B) (1-\cos C)$ then prove that $y = \pm \sin A \sin B \sin C$
2,084,182
4
https://math.stackexchange.com/questions/2084182/if-1-cos-a-1-cos-b-1-cos-c-y-1-cos-a-1-cos-b-1-cos-c-t
<p>If $$(1+ \cos A) (1+ \cos B) (1+\cos C)= y = (1- \cos A) (1-\cos B) (1-\cos C)$$ then prove that $$y = \pm \sin A \sin B \sin C$$</p> <p>My Work: If $y=\prod(1+\cos A)=\prod(1-\cos A)$</p> <p>$y^2=y\cdot y=\prod(1+\cos A)\cdot\prod(1-\cos A)$</p> <p>How should I proceed from here? I am not getting any ideas.</p>
[ { "answer_id": 2084185, "score": 3, "text": "<p>You've essentially solved it. Note $\\prod(1+\\cos A)\\cdot\\prod(1-\\cos A) = \\prod((1+\\cos A) (1 - \\cos A)) = \\prod (1 - \\cos^2 A) = \\prod \\sin^2 A$.</p>\n\n<p>Therefore, you can finish by showing $y^2 = \\prod \\sin^2 A$, and the claim follows.</p>\n\n<p>Hope this helps!</p>\n" }, { "answer_id": 2084186, "score": 0, "text": "<p>Hint: \n$$\\sin \\theta = \\pm \\sqrt{1- \\cos^2 \\theta}$$</p>\n" }, { "answer_id": 2084185, "score": 3, "text": "<p>You've essentially solved it. Note $\\prod(1+\\cos A)\\cdot\\prod(1-\\cos A) = \\prod((1+\\cos A) (1 - \\cos A)) = \\prod (1 - \\cos^2 A) = \\prod \\sin^2 A$.</p>\n\n<p>Therefore, you can finish by showing $y^2 = \\prod \\sin^2 A$, and the claim follows.</p>\n\n<p>Hope this helps!</p>\n" }, { "answer_id": 2084186, "score": 0, "text": "<p>Hint: \n$$\\sin \\theta = \\pm \\sqrt{1- \\cos^2 \\theta}$$</p>\n" }, { "answer_id": 2084185, "score": 3, "text": "<p>You've essentially solved it. Note $\\prod(1+\\cos A)\\cdot\\prod(1-\\cos A) = \\prod((1+\\cos A) (1 - \\cos A)) = \\prod (1 - \\cos^2 A) = \\prod \\sin^2 A$.</p>\n\n<p>Therefore, you can finish by showing $y^2 = \\prod \\sin^2 A$, and the claim follows.</p>\n\n<p>Hope this helps!</p>\n" }, { "answer_id": 2084186, "score": 0, "text": "<p>Hint: \n$$\\sin \\theta = \\pm \\sqrt{1- \\cos^2 \\theta}$$</p>\n" } ]
Question: If $(1+ \cos A) (1+ \cos B) (1+\cos C)= y = (1- \cos A) (1-\cos B) (1-\cos C)$ then prove that $y = \pm \sin A \sin B \sin C$ Answer 1: <p>You've essentially solved it. Note $\prod(1+\cos A)\cdot\prod(1-\cos A) = \prod((1+\cos A) (1 - \cos A)) = \prod (1 - \cos^2 A) = \prod \sin^2 A$.</p> <p>Therefore, you can finish by showing $y^2 = \prod \sin^2 A$, and the claim follows.</p> <p>Hope this helps!</p> Answer 2: <p>Hint: $$\sin \theta = \pm \sqrt{1- \cos^2 \theta}$$</p>
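A small numeric check (not from the original thread) of the key step in Answer 1: the product of the two given expressions equals $(\sin A\sin B\sin C)^2$ for arbitrary angles, so $y^2=\sin^2 A\sin^2 B\sin^2 C$ whenever both expressions equal $y$.

<pre><code># Verify prod(1+cos) * prod(1-cos) == (sin A sin B sin C)^2 on random angles.
import math, random

for _ in range(1000):
    A, B, C = (random.uniform(0.1, 3.0) for _ in range(3))
    plus  = (1 + math.cos(A)) * (1 + math.cos(B)) * (1 + math.cos(C))
    minus = (1 - math.cos(A)) * (1 - math.cos(B)) * (1 - math.cos(C))
    sines = math.sin(A) * math.sin(B) * math.sin(C)
    assert abs(plus * minus - sines ** 2) < 1e-9
</code></pre>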
Taking a derivative from both sides of the equation
2,084,320
4
https://math.stackexchange.com/questions/2084320/taking-a-derivative-from-both-sides-of-the-equation
<p>Our teacher tried to explain to us how to find the slope at a given point of the curve $y^2 = 2px$ by taking the derivative of both sides of the equation; he did this: $$y^2= 2px $$ $$2yy' = 2p$$ $$ y' = \frac{p}{y}$$ I haven't quite understood the second step of the process and would be thankful for a logical and clear explanation :)</p>
[ { "answer_id": 2084322, "score": 4, "text": "<p>When we find derivative of some term. First we find the derivative of the power then the derivative of term. </p>\n\n<p>So left hand side of second step,</p>\n\n<p>$\\frac{d}{dx}y^2 = 2y^{2-1}.\\frac{d}{dx}y$</p>\n\n<p>= $2y^{1}.\\frac{dy}{dx}$</p>\n\n<p>And you can write $\\frac{dy}{dx} = y'$</p>\n\n<p>So we have,</p>\n\n<p>= $2y.y'$</p>\n\n<p>In second step on the right hand side,</p>\n\n<p>$\\frac{d}{dx}(2px)$ </p>\n\n<p>= 2p$\\frac{d}{dx}x$ </p>\n\n<p>As 2 and p is constant and derivative of x with respect to x is 1.</p>\n\n<p>= 2p</p>\n\n<p>At last,</p>\n\n<p>2y.y' = 2p.</p>\n\n<p>$y' = \\frac{p}{y}$</p>\n" }, { "answer_id": 2084329, "score": 4, "text": "<p>It will become clear if you know the concept of <a href=\"https://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/implicitdiffdirectory/ImplicitDiff.html\" rel=\"nofollow noreferrer\">implicit differentiation</a>. </p>\n\n<p>Thus we have that $$y^2=2px $$ $$\\Rightarrow 2y\\frac {dy}{dx} =2p\\frac {dx}{dx} $$ $$\\Rightarrow 2yy'= 2p $$ $$\\Rightarrow y'=\\frac {p}{y} $$ where $y^{(n)} $ is the nth derivative of $y$ (we generally use $y', y''$ etc. for lower derivatives but it is not advisable for higher ones). Hope it helps. </p>\n" }, { "answer_id": 2084328, "score": 2, "text": "<p>This is just the chain rule you presumably learned about earlier in calc, but the notation is more compact. Recall the chain rule says that\n$$\n\\frac{d}{dx}f(y(x)) = f'(y(x))y'(x)\n$$</p>\n\n<p>The LHS of the second line is just the chain rule applied for $f(y) = y^2$. Since $f'(y) = 2y,$ we have \n$$\n\\frac{d}{dx}(y(x))^2 = 2y(x)y'(x).\n$$</p>\n" }, { "answer_id": 2084339, "score": 0, "text": "<p>$y$ is a function of $x$. $y^2$ is differentiated with respect to independent variable $x$ using Chain Rule. Instead of everytime writing like the following</p>\n\n<p>$$y(x)^2= 2px, \\,2y(x)y'(x) = 2p,\\, y^{\\prime}(x) = \\frac{p}{y(x)}$$ </p>\n\n<p>your teacher has omitted repetitive parentheses $(x)$, assuming that you can appreciate that $p$ is a constant and $x,y$ are variables.</p>\n\n<p>BTW $p$ is double the focal length of parabola which has a property that sub-normal <em>( projection on x-axis $=p)$ is always constant</em> for the normal (in red).</p>\n\n<p><a href=\"https://i.stack.imgur.com/5vBJ6.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/5vBJ6.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2084322, "score": 4, "text": "<p>When we find derivative of some term. First we find the derivative of the power then the derivative of term. </p>\n\n<p>So left hand side of second step,</p>\n\n<p>$\\frac{d}{dx}y^2 = 2y^{2-1}.\\frac{d}{dx}y$</p>\n\n<p>= $2y^{1}.\\frac{dy}{dx}$</p>\n\n<p>And you can write $\\frac{dy}{dx} = y'$</p>\n\n<p>So we have,</p>\n\n<p>= $2y.y'$</p>\n\n<p>In second step on the right hand side,</p>\n\n<p>$\\frac{d}{dx}(2px)$ </p>\n\n<p>= 2p$\\frac{d}{dx}x$ </p>\n\n<p>As 2 and p is constant and derivative of x with respect to x is 1.</p>\n\n<p>= 2p</p>\n\n<p>At last,</p>\n\n<p>2y.y' = 2p.</p>\n\n<p>$y' = \\frac{p}{y}$</p>\n" }, { "answer_id": 2084329, "score": 4, "text": "<p>It will become clear if you know the concept of <a href=\"https://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/implicitdiffdirectory/ImplicitDiff.html\" rel=\"nofollow noreferrer\">implicit differentiation</a>. 
</p>\n\n<p>Thus we have that $$y^2=2px $$ $$\\Rightarrow 2y\\frac {dy}{dx} =2p\\frac {dx}{dx} $$ $$\\Rightarrow 2yy'= 2p $$ $$\\Rightarrow y'=\\frac {p}{y} $$ where $y^{(n)} $ is the nth derivative of $y$ (we generally use $y', y''$ etc. for lower derivatives but it is not advisable for higher ones). Hope it helps. </p>\n" }, { "answer_id": 2084328, "score": 2, "text": "<p>This is just the chain rule you presumably learned about earlier in calc, but the notation is more compact. Recall the chain rule says that\n$$\n\\frac{d}{dx}f(y(x)) = f'(y(x))y'(x)\n$$</p>\n\n<p>The LHS of the second line is just the chain rule applied for $f(y) = y^2$. Since $f'(y) = 2y,$ we have \n$$\n\\frac{d}{dx}(y(x))^2 = 2y(x)y'(x).\n$$</p>\n" }, { "answer_id": 2084339, "score": 0, "text": "<p>$y$ is a function of $x$. $y^2$ is differentiated with respect to independent variable $x$ using Chain Rule. Instead of everytime writing like the following</p>\n\n<p>$$y(x)^2= 2px, \\,2y(x)y'(x) = 2p,\\, y^{\\prime}(x) = \\frac{p}{y(x)}$$ </p>\n\n<p>your teacher has omitted repetitive parentheses $(x)$, assuming that you can appreciate that $p$ is a constant and $x,y$ are variables.</p>\n\n<p>BTW $p$ is double the focal length of parabola which has a property that sub-normal <em>( projection on x-axis $=p)$ is always constant</em> for the normal (in red).</p>\n\n<p><a href=\"https://i.stack.imgur.com/5vBJ6.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/5vBJ6.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2084322, "score": 4, "text": "<p>When we find derivative of some term. First we find the derivative of the power then the derivative of term. </p>\n\n<p>So left hand side of second step,</p>\n\n<p>$\\frac{d}{dx}y^2 = 2y^{2-1}.\\frac{d}{dx}y$</p>\n\n<p>= $2y^{1}.\\frac{dy}{dx}$</p>\n\n<p>And you can write $\\frac{dy}{dx} = y'$</p>\n\n<p>So we have,</p>\n\n<p>= $2y.y'$</p>\n\n<p>In second step on the right hand side,</p>\n\n<p>$\\frac{d}{dx}(2px)$ </p>\n\n<p>= 2p$\\frac{d}{dx}x$ </p>\n\n<p>As 2 and p is constant and derivative of x with respect to x is 1.</p>\n\n<p>= 2p</p>\n\n<p>At last,</p>\n\n<p>2y.y' = 2p.</p>\n\n<p>$y' = \\frac{p}{y}$</p>\n" }, { "answer_id": 2084329, "score": 4, "text": "<p>It will become clear if you know the concept of <a href=\"https://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/implicitdiffdirectory/ImplicitDiff.html\" rel=\"nofollow noreferrer\">implicit differentiation</a>. </p>\n\n<p>Thus we have that $$y^2=2px $$ $$\\Rightarrow 2y\\frac {dy}{dx} =2p\\frac {dx}{dx} $$ $$\\Rightarrow 2yy'= 2p $$ $$\\Rightarrow y'=\\frac {p}{y} $$ where $y^{(n)} $ is the nth derivative of $y$ (we generally use $y', y''$ etc. for lower derivatives but it is not advisable for higher ones). Hope it helps. </p>\n" }, { "answer_id": 2084328, "score": 2, "text": "<p>This is just the chain rule you presumably learned about earlier in calc, but the notation is more compact. Recall the chain rule says that\n$$\n\\frac{d}{dx}f(y(x)) = f'(y(x))y'(x)\n$$</p>\n\n<p>The LHS of the second line is just the chain rule applied for $f(y) = y^2$. Since $f'(y) = 2y,$ we have \n$$\n\\frac{d}{dx}(y(x))^2 = 2y(x)y'(x).\n$$</p>\n" }, { "answer_id": 2084339, "score": 0, "text": "<p>$y$ is a function of $x$. $y^2$ is differentiated with respect to independent variable $x$ using Chain Rule. 
Instead of everytime writing like the following</p>\n\n<p>$$y(x)^2= 2px, \\,2y(x)y'(x) = 2p,\\, y^{\\prime}(x) = \\frac{p}{y(x)}$$ </p>\n\n<p>your teacher has omitted repetitive parentheses $(x)$, assuming that you can appreciate that $p$ is a constant and $x,y$ are variables.</p>\n\n<p>BTW $p$ is double the focal length of parabola which has a property that sub-normal <em>( projection on x-axis $=p)$ is always constant</em> for the normal (in red).</p>\n\n<p><a href=\"https://i.stack.imgur.com/5vBJ6.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/5vBJ6.png\" alt=\"enter image description here\"></a></p>\n" } ]
Question: Taking a derivative from both sides of the equation Answer 1: <p>When we differentiate a power of some term, we first apply the power rule and then multiply by the derivative of the term. </p> <p>So on the left-hand side of the second step,</p> <p>$\frac{d}{dx}y^2 = 2y^{2-1}.\frac{d}{dx}y$</p> <p>= $2y^{1}.\frac{dy}{dx}$</p> <p>And you can write $\frac{dy}{dx} = y'$</p> <p>So we have,</p> <p>= $2y.y'$</p> <p>In the second step, on the right-hand side,</p> <p>$\frac{d}{dx}(2px)$ </p> <p>= 2p$\frac{d}{dx}x$ </p> <p>As $2$ and $p$ are constants and the derivative of $x$ with respect to $x$ is $1$.</p> <p>= 2p</p> <p>Finally,</p> <p>2y.y' = 2p.</p> <p>$y' = \frac{p}{y}$</p> Answer 2: <p>It will become clear if you know the concept of <a href="https://www.math.ucdavis.edu/~kouba/CalcOneDIRECTORY/implicitdiffdirectory/ImplicitDiff.html" rel="nofollow noreferrer">implicit differentiation</a>. </p> <p>Thus we have that $$y^2=2px $$ $$\Rightarrow 2y\frac {dy}{dx} =2p\frac {dx}{dx} $$ $$\Rightarrow 2yy'= 2p $$ $$\Rightarrow y'=\frac {p}{y} $$ where $y^{(n)} $ is the $n$th derivative of $y$ (we generally use $y', y''$ etc. for lower derivatives but it is not advisable for higher ones). Hope it helps. </p> Answer 3: <p>This is just the chain rule you presumably learned about earlier in calc, but the notation is more compact. Recall the chain rule says that $$ \frac{d}{dx}f(y(x)) = f'(y(x))y'(x) $$</p> <p>The LHS of the second line is just the chain rule applied for $f(y) = y^2$. Since $f'(y) = 2y,$ we have $$ \frac{d}{dx}(y(x))^2 = 2y(x)y'(x). $$</p> Answer 4: <p>$y$ is a function of $x$. $y^2$ is differentiated with respect to the independent variable $x$ using the Chain Rule. Instead of writing everything out every time, like the following</p> <p>$$y(x)^2= 2px, \,2y(x)y'(x) = 2p,\, y^{\prime}(x) = \frac{p}{y(x)}$$ </p> <p>your teacher has omitted the repetitive parentheses $(x)$, assuming that you can appreciate that $p$ is a constant and $x,y$ are variables.</p> <p>BTW $p$ is double the focal length of the parabola, which has the property that the subnormal <em>(projection on the x-axis $=p$) is always constant</em> for the normal (in red).</p> <p><a href="https://i.stack.imgur.com/5vBJ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5vBJ6.png" alt="enter image description here"></a></p>
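A minimal SymPy sketch (not part of the original posts) that mechanizes the implicit differentiation step all four answers describe: differentiate $y(x)^2 = 2px$ with respect to $x$ and solve for $y'$.

<pre><code># Implicit differentiation of y(x)^2 = 2*p*x, solved for y'(x).
import sympy as sp

x, p = sp.symbols('x p')
y = sp.Function('y')(x)
yprime = sp.solve(sp.diff(y**2 - 2*p*x, x), sp.Derivative(y, x))[0]
print(yprime)  # p/y(x), matching y' = p/y in the question
</code></pre>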
How to construct symmetric and positive definite $A,B,C$ such that $A+B+C=I$?
2,084,573
4
https://math.stackexchange.com/questions/2084573/how-to-construct-symmetric-and-positive-definite-a-b-c-such-that-abc-i
<p>In an attempt to formulate an answer to this (in)famous question <UL><LI> <A HREF="http://math.stackexchange.com/questions/860294/how-does-one-prove-the-matrix-inequality-det-left6a3b3c3i-n-right">How does one prove the matrix inequality $\det\left(6(A^3+B^3+C^3)+I_{n}\right)\ge 5^n\det(A^2+B^2+C^2)$?</A> </LI></UL> I'm trying to construct three $n\times n$ matrices $A,B,C$ that are (a) symmetric, (b) positive definite, (c) add to $I_n$ . Note that I've already decided to restrict attention to the reals and I have replaced Hermitian by symmetric (which IMO is difficult enough).</p> <p>My unsuccessful attempts are a wild mixture of two extremes: <UL><LI>Make a random square matrix $H$ and form $A = H^TH$ . Make another random square matrix $H$ and form $B = H^TH$ . In the same way, form $C = H^TH$ . Then $A,B,C$ are symmetric and positive definite. But in general $A+B+C \ne I$ .</LI><LI> Generate random numbers for $A_{ij} = A_{ji}$ , $B_{ij} = B_{ji}$ and form $C_{ij} = C_{ji} = I_{ij}-A_{ij}-B_{ij}$ . Then $A,B,C$ are symmetric and $A+B+C = I$ , but it cannot be guaranteed that these are positive definite matrices.</LI></UL> So the question is: how can the three requirements (a) , (b) , (c) be fulfilled at the same time, while still keeping $A,B,C$ as random as possible? My plan is to do numerical experiments and eventually find a counterexample. I have all the ingredients to do it, except this.</p>
[ { "answer_id": 2084970, "score": 4, "text": "<p>This is easy. Denote by $\\operatorname{diag}(x)$ the diagonal matrix whose diagonal is the vector $x$. Assuming that $A$ is diagonal, every valid triple $(A,B,C)$ can be expressed as follows:\n\\begin{align*}\nA&amp;=\\operatorname{diag}(a),\\\\\nR&amp;=(I-A)^{1/2},\\\\\nB&amp;=RU\\operatorname{diag}(b)U^\\ast R,\\\\\nC&amp;=I-A-B,\n\\end{align*}\nwhere $a$ and $b$ are two vectors whose entries lie inside $(0,1)$ and $U$ is a unitary matrix. Note that $C=RU\\left(I-\\operatorname{diag}(b)\\right)U^\\ast R$. Hence it is positive definite.</p>\n\n<p>So, to generate a random sample, all you need are random instances of $a,b$ and $U$. The unitary matrix $U$ can be obtained by performing a singular value decomposition on a random square matrix, or you may generate it using the method mentioned in <a href=\"https://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization\" rel=\"nofollow noreferrer\">Wikipedia</a>.</p>\n" }, { "answer_id": 4527024, "score": 2, "text": "<p>Another option is the following.</p>\n<ol>\n<li>Pick arbitrary random matrices <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> (either real or complex depending on whether symmetric or Hermitian matrices are needed).</li>\n<li>Set <span class=\"math-container\">$A_0 = X^\\dagger X$</span>, <span class=\"math-container\">$B_0 = Y^\\dagger Y$</span>, <span class=\"math-container\">$C_0 = Z^\\dagger Z$</span>. These are\npositive Hermitian, but do not necessarily add up to one.</li>\n<li>Compute <span class=\"math-container\">$D = A_0 + B_0 + C_0$</span> and set <span class=\"math-container\">$A = D^{-1/2} A_0 D^{-1/2}$</span>, <span class=\"math-container\">$B = D^{-1/2} B_0 D^{-1/2}$</span>, <span class=\"math-container\">$C = D^{-1/2} C_0 D^{-1/2}$</span>.</li>\n</ol>\n<p>By construction <span class=\"math-container\">$A+B+C = D^{-1/2} D D^{-1/2} = I$</span>.</p>\n<p>The minor downside of this construction (depending on the distribution used to pick <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>) is that it is theoretically possible that <span class=\"math-container\">$D$</span> has a 0 eigenvalue. That seems unlikely for most of the distributions for <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>, since that would imply that the corresponding eigenvector is\nthe eigenvector with 0 eigenvalue for all 3 of <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> simultaneously. If that happens, one may restart the sampling from the beginning, or add a small positive number (times an identity matrix) to <span class=\"math-container\">$A_0$</span>, <span class=\"math-container\">$B_0$</span>, <span class=\"math-container\">$C_0$</span>, or all of them.</p>\n" }, { "answer_id": 2084970, "score": 4, "text": "<p>This is easy. Denote by $\\operatorname{diag}(x)$ the diagonal matrix whose diagonal is the vector $x$. Assuming that $A$ is diagonal, every valid triple $(A,B,C)$ can be expressed as follows:\n\\begin{align*}\nA&amp;=\\operatorname{diag}(a),\\\\\nR&amp;=(I-A)^{1/2},\\\\\nB&amp;=RU\\operatorname{diag}(b)U^\\ast R,\\\\\nC&amp;=I-A-B,\n\\end{align*}\nwhere $a$ and $b$ are two vectors whose entries lie inside $(0,1)$ and $U$ is a unitary matrix. 
Note that $C=RU\\left(I-\\operatorname{diag}(b)\\right)U^\\ast R$. Hence it is positive definite.</p>\n\n<p>So, to generate a random sample, all you need are random instances of $a,b$ and $U$. The unitary matrix $U$ can be obtained by performing a singular value decomposition on a random square matrix, or you may generate it using the method mentioned in <a href=\"https://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization\" rel=\"nofollow noreferrer\">Wikipedia</a>.</p>\n" }, { "answer_id": 4527024, "score": 2, "text": "<p>Another option is the following.</p>\n<ol>\n<li>Pick arbitrary random matrices <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> (either real or complex depending on whether symmetric or Hermitian matrices are needed).</li>\n<li>Set <span class=\"math-container\">$A_0 = X^\\dagger X$</span>, <span class=\"math-container\">$B_0 = Y^\\dagger Y$</span>, <span class=\"math-container\">$C_0 = Z^\\dagger Z$</span>. These are\npositive Hermitian, but do not necessarily add up to one.</li>\n<li>Compute <span class=\"math-container\">$D = A_0 + B_0 + C_0$</span> and set <span class=\"math-container\">$A = D^{-1/2} A_0 D^{-1/2}$</span>, <span class=\"math-container\">$B = D^{-1/2} B_0 D^{-1/2}$</span>, <span class=\"math-container\">$C = D^{-1/2} C_0 D^{-1/2}$</span>.</li>\n</ol>\n<p>By construction <span class=\"math-container\">$A+B+C = D^{-1/2} D D^{-1/2} = I$</span>.</p>\n<p>The minor downside of this construction (depending on the distribution used to pick <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>) is that it is theoretically possible that <span class=\"math-container\">$D$</span> has a 0 eigenvalue. That seems unlikely for most of the distributions for <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>, since that would imply that the corresponding eigenvector is\nthe eigenvector with 0 eigenvalue for all 3 of <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> simultaneously. If that happens, one may restart the sampling from the beginning, or add a small positive number (times an identity matrix) to <span class=\"math-container\">$A_0$</span>, <span class=\"math-container\">$B_0$</span>, <span class=\"math-container\">$C_0$</span>, or all of them.</p>\n" }, { "answer_id": 2084970, "score": 4, "text": "<p>This is easy. Denote by $\\operatorname{diag}(x)$ the diagonal matrix whose diagonal is the vector $x$. Assuming that $A$ is diagonal, every valid triple $(A,B,C)$ can be expressed as follows:\n\\begin{align*}\nA&amp;=\\operatorname{diag}(a),\\\\\nR&amp;=(I-A)^{1/2},\\\\\nB&amp;=RU\\operatorname{diag}(b)U^\\ast R,\\\\\nC&amp;=I-A-B,\n\\end{align*}\nwhere $a$ and $b$ are two vectors whose entries lie inside $(0,1)$ and $U$ is a unitary matrix. Note that $C=RU\\left(I-\\operatorname{diag}(b)\\right)U^\\ast R$. Hence it is positive definite.</p>\n\n<p>So, to generate a random sample, all you need are random instances of $a,b$ and $U$. 
The unitary matrix $U$ can be obtained by performing a singular value decomposition on a random square matrix, or you may generate it using the method mentioned in <a href=\"https://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization\" rel=\"nofollow noreferrer\">Wikipedia</a>.</p>\n" }, { "answer_id": 4527024, "score": 2, "text": "<p>Another option is the following.</p>\n<ol>\n<li>Pick arbitrary random matrices <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> (either real or complex depending on whether symmetric or Hermitian matrices are needed).</li>\n<li>Set <span class=\"math-container\">$A_0 = X^\\dagger X$</span>, <span class=\"math-container\">$B_0 = Y^\\dagger Y$</span>, <span class=\"math-container\">$C_0 = Z^\\dagger Z$</span>. These are\npositive Hermitian, but do not necessarily add up to one.</li>\n<li>Compute <span class=\"math-container\">$D = A_0 + B_0 + C_0$</span> and set <span class=\"math-container\">$A = D^{-1/2} A_0 D^{-1/2}$</span>, <span class=\"math-container\">$B = D^{-1/2} B_0 D^{-1/2}$</span>, <span class=\"math-container\">$C = D^{-1/2} C_0 D^{-1/2}$</span>.</li>\n</ol>\n<p>By construction <span class=\"math-container\">$A+B+C = D^{-1/2} D D^{-1/2} = I$</span>.</p>\n<p>The minor downside of this construction (depending on the distribution used to pick <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>) is that it is theoretically possible that <span class=\"math-container\">$D$</span> has a 0 eigenvalue. That seems unlikely for most of the distributions for <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span>, since that would imply that the corresponding eigenvector is\nthe eigenvector with 0 eigenvalue for all 3 of <span class=\"math-container\">$X$</span>, <span class=\"math-container\">$Y$</span>, <span class=\"math-container\">$Z$</span> simultaneously. If that happens, one may restart the sampling from the beginning, or add a small positive number (times an identity matrix) to <span class=\"math-container\">$A_0$</span>, <span class=\"math-container\">$B_0$</span>, <span class=\"math-container\">$C_0$</span>, or all of them.</p>\n" } ]
Question: How to construct symmetric and positive definite $A,B,C$ such that $A+B+C=I$? Answer 1: <p>This is easy. Denote by $\operatorname{diag}(x)$ the diagonal matrix whose diagonal is the vector $x$. Assuming that $A$ is diagonal, every valid triple $(A,B,C)$ can be expressed as follows: \begin{align*} A&amp;=\operatorname{diag}(a),\\ R&amp;=(I-A)^{1/2},\\ B&amp;=RU\operatorname{diag}(b)U^\ast R,\\ C&amp;=I-A-B, \end{align*} where $a$ and $b$ are two vectors whose entries lie inside $(0,1)$ and $U$ is a unitary matrix. Note that $C=RU\left(I-\operatorname{diag}(b)\right)U^\ast R$. Hence it is positive definite.</p> <p>So, to generate a random sample, all you need are random instances of $a,b$ and $U$. The unitary matrix $U$ can be obtained by performing a singular value decomposition on a random square matrix, or you may generate it using the method mentioned in <a href="https://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization" rel="nofollow noreferrer">Wikipedia</a>.</p> Answer 2: <p>Another option is the following.</p> <ol> <li>Pick arbitrary random matrices <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, <span class="math-container">$Z$</span> (either real or complex depending on whether symmetric or Hermitian matrices are needed).</li> <li>Set <span class="math-container">$A_0 = X^\dagger X$</span>, <span class="math-container">$B_0 = Y^\dagger Y$</span>, <span class="math-container">$C_0 = Z^\dagger Z$</span>. These are positive Hermitian, but do not necessarily add up to the identity.</li> <li>Compute <span class="math-container">$D = A_0 + B_0 + C_0$</span> and set <span class="math-container">$A = D^{-1/2} A_0 D^{-1/2}$</span>, <span class="math-container">$B = D^{-1/2} B_0 D^{-1/2}$</span>, <span class="math-container">$C = D^{-1/2} C_0 D^{-1/2}$</span>.</li> </ol> <p>By construction <span class="math-container">$A+B+C = D^{-1/2} D D^{-1/2} = I$</span>.</p> <p>The minor downside of this construction (depending on the distribution used to pick <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, <span class="math-container">$Z$</span>) is that it is theoretically possible that <span class="math-container">$D$</span> has a 0 eigenvalue. That seems unlikely for most of the distributions for <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, <span class="math-container">$Z$</span>, since it would require the corresponding eigenvector to be a common null vector of all 3 of <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, <span class="math-container">$Z$</span> simultaneously. If that happens, one may restart the sampling from the beginning, or add a small positive number (times an identity matrix) to <span class="math-container">$A_0$</span>, <span class="math-container">$B_0$</span>, <span class="math-container">$C_0$</span>, or all of them.</p>
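Both recipes above translate directly into code. Here is a minimal NumPy sketch of the normalization construction from Answer 2; the function name and the use of an eigendecomposition to form $D^{-1/2}$ are my own illustrative choices, not part of the original answer:

```python
import numpy as np

def random_psd_triple(n, rng=np.random.default_rng(0)):
    """Answer 2's recipe: three Gram matrices renormalized by D^{-1/2}."""
    X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))
    A0, B0, C0 = X.T @ X, Y.T @ Y, Z.T @ Z   # symmetric, a.s. positive definite
    D = A0 + B0 + C0
    w, V = np.linalg.eigh(D)                 # D = V diag(w) V^T, with w > 0 a.s.
    D_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return tuple(D_inv_sqrt @ M @ D_inv_sqrt for M in (A0, B0, C0))

A, B, C = random_psd_triple(4)
assert np.allclose(A + B + C, np.eye(4))                        # sums to I
assert all(np.linalg.eigvalsh(M).min() > 0 for M in (A, B, C))  # positive definite
```

The complex (Hermitian) variant is identical except that the entries are drawn complex and `.T` is replaced by conjugate transposition.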
Relationship between wild ramification and restriction in Galois extension
2,085,044
4
https://math.stackexchange.com/questions/2085044/relationship-between-wild-ramification-and-restriction-in-galois-extension
<p>For each $n \geq 1$, we denote by $\mathbb{Q}_n$ the unique subfield of $\mathbb{Q}(\zeta_{p^{n+1}})$ for which $[\mathbb{Q}_n:\mathbb{Q}]=p^n$. Let now $K$ be a finite Galois extension of $\mathbb{Q}$ with the following properties:</p> <ol> <li>K contains $\mathbb{Q}_m$ for some $m \geq 1$;</li> <li>There exists an element $g \in Gal(K/\mathbb{Q})$ with the property that the order of $g$ is $p^n$ with $n&gt;m$ and $\left. g \right|_{\mathbb{Q}_m}$ is the generator of the Galois group $Gal(\mathbb{Q}_m/\mathbb{Q})$. </li> </ol> <p>Is it then true that $K$ must contain $\mathbb{Q}_t$ for some $t&gt;m$? </p> <p>I have tried to use quite a lot of theory around wild ramification and local considerations (including this post <a href="https://math.stackexchange.com/questions/2073623/wildly-ramified-extensions-and-p-power-roots-of-unity/2076644">wildly ramified extensions and $p$-power roots of unity</a> ), without success in producing a proof/counterexample.</p>
[ { "answer_id": 2087797, "score": 2, "text": "<p>Since the prime $p$ ($\\neq 2$ for simplification) plays a special role, let us concentrate on the case when $K/Q$ is a $p$-extension, i.e. a Galois extension whose group $G$ is a finite $p$-group. The case of a general $G$ can be derived from this provided we have sufficient information on the structure of $G$. Suppose also that $K/Q$ is $S$-ramified, i.e. unramified outside a finite set of primes $S$={$p$} $\\cup$ $T$, with $T$ = {$l_1,...,l_r$} or $T$ empty. Note that the presence of $p$ is necessary because we'll deal with the so-called cyclotomic $Z_p$-extension of $Q$, namely $G_S (k) = \\cup Q_n$. </p>\n\n<p>For a number field $k$ and the set $S_k$ of places of $k$ above $S$ (we'll write $S$ for $S_k$, no confusion is possible), denote by $k_S$ the maximal pro-$p$-extension of $k$ which is $S$-ramified, and $G_S (k) = Gal(k_S/k)$. The description of $G_S (k)$ by generators and relations can be approached using CFT and Galois cohomology, as for example in H. Koch's book <em>\"Galois Theory of p-extensions\"</em>. In particular, the minimal number $d$ of generators (resp. the number $r$ of relations) of $G_S (k)$ is equal to the $Z/pZ$-dimension of $H^1(G_S (k), Z/pZ)$ (resp. of $H^2(.)$). Actually, $d$ has been computed by Shafarevitch (<em>op. cit.</em>,§11, thm. 11.8), and $r$ is given by a so-called Euler-Poincaré characteristic formula : $1-d+r = -c$, where $c$ is the number of complex places of $k$.</p>\n\n<p>After these preliminaries, we can attack your problem over $k = Q$. Since no ramification condition is specified, we must consider two separate cases:</p>\n\n<p>1) If $T$ is empty, i.e. only wild ramification is allowed, Shafarevitch's formula over $Q$ gives $d=1$ and $r=0$, in other words, $G_S (Q)$ is the free pro-$p$-group on a single generator, which is $Z_p$. Equivalently, $Q_S = Q_{\\infty}$ , which implies that $K$ = a certain $Q_n$ , and the answer to your question is trivially YES</p>\n\n<p>2) If $T$ is not empty, i.e. tame ramification is allowed, let $ Q_m = K \\cap Q_{\\infty}$ , $G = Gal(K/Q)$ and $H = Gal(K/Q_m)$. Suppose, as you do, that there exists $g \\in G$, of order $n&gt;m$, such that its image $ g$ mod $H$ in $G/H$ generates $G/H = Gal(Q_m/Q)$. Replace $K$ by the fixed field $L$ of $(H \\cap&lt;g&gt;)$, so that the image $g$ mod $(H\\cap&lt;g&gt;)$ in $Gal(L/Q)$ generates a subgroup isomorphic to $Gal(Q_m/Q)$. This means that that the exact sequence $1 \\to Gal(L/Q_m) \\to Gal(L/Q) \\to Gal(Q_m/Q) \\to 1$ splits, i.e. describes a semi-direct product. So in this case the answer to your question will be NO if we can construct the adequate extension $L/Q_m$ . If $T$ = {$l_1,...,l_r$}, the number of relations of $G_S (Q)$ will be $r$, more precisely the relations will come from $r$ local tame relations of the type (<em>op. cit.</em>, §10, thm. 10.2) $t^{l-1}[t^{-1}, s^{-1}]=1$ (necessarily, $l\\equiv 1$ mod $p$ to allow ramification). This means that the maximal abelian quotient of $G_S (Q)$, considered as a $Z_p$-module, will have rank 1 and torsion isomorphic to a product of cyclic groups $Z/(l -1)Z$. By adjusting our $l$, we can immediately construct the desired extension $L/Q_m$ ./.</p>\n" }, { "answer_id": 2093136, "score": 1, "text": "<p>On re-reading my second comment, I do not find it very enlightening, so I propose to replace it by this more detailed answer (mainly in order to be able to type math symbols (*)). 
I also edited two unfortunate misprints in my first answer (just before the exact sequence of Galois groups).</p>\n\n<p>So let us construct our abelian extension $K/Q$ with the required $g$ of order $p^n$, with $n&gt;m$, and such that $Gal(K/Q)\\cong Gal(K/Q_m) \\times Gal(Q_m/Q)$. We have seen that he Galois group of the maximal abelian $S$-ramified $S$-extension of $Q$ is a direct product ($Z_p, +$)$\\times \\mathcal T$, where the torsion group $\\mathcal T$ is a finite product of cyclic groups of the form $Z/(l-1)Z$. Because a subgroup of $Z_p$ is isomorphic to $Z_p$ hence is pro-$p$-free, $Gal(K/Q)$ will be viewed as a quotient of $Gal(Q_m/Q) \\times\\mathcal T$ and $Gal(K/Q_m)$ as a quotient of $\\mathcal T$. Then choose $l \\in T$, such that $p^n$ divides exactly $ 1- p^n$, so that $Gal(K/Q)$ will contain an element $g$ of order $p^n$ coming from $\\mathcal T$. Taking further $l' \\in T, l' \\neq l $, $T=${$l, l'$}, and $Gal(K/Q_m) \\cong$ the factor $Z/(l'-1)Z$ of $\\mathcal T$, we get our extension $K$ with the desired properties. Things could be seen much more clearly by drawing Galois diagrams.</p>\n" }, { "answer_id": 2087797, "score": 2, "text": "<p>Since the prime $p$ ($\\neq 2$ for simplification) plays a special role, let us concentrate on the case when $K/Q$ is a $p$-extension, i.e. a Galois extension whose group $G$ is a finite $p$-group. The case of a general $G$ can be derived from this provided we have sufficient information on the structure of $G$. Suppose also that $K/Q$ is $S$-ramified, i.e. unramified outside a finite set of primes $S$={$p$} $\\cup$ $T$, with $T$ = {$l_1,...,l_r$} or $T$ empty. Note that the presence of $p$ is necessary because we'll deal with the so-called cyclotomic $Z_p$-extension of $Q$, namely $G_S (k) = \\cup Q_n$. </p>\n\n<p>For a number field $k$ and the set $S_k$ of places of $k$ above $S$ (we'll write $S$ for $S_k$, no confusion is possible), denote by $k_S$ the maximal pro-$p$-extension of $k$ which is $S$-ramified, and $G_S (k) = Gal(k_S/k)$. The description of $G_S (k)$ by generators and relations can be approached using CFT and Galois cohomology, as for example in H. Koch's book <em>\"Galois Theory of p-extensions\"</em>. In particular, the minimal number $d$ of generators (resp. the number $r$ of relations) of $G_S (k)$ is equal to the $Z/pZ$-dimension of $H^1(G_S (k), Z/pZ)$ (resp. of $H^2(.)$). Actually, $d$ has been computed by Shafarevitch (<em>op. cit.</em>,§11, thm. 11.8), and $r$ is given by a so-called Euler-Poincaré characteristic formula : $1-d+r = -c$, where $c$ is the number of complex places of $k$.</p>\n\n<p>After these preliminaries, we can attack your problem over $k = Q$. Since no ramification condition is specified, we must consider two separate cases:</p>\n\n<p>1) If $T$ is empty, i.e. only wild ramification is allowed, Shafarevitch's formula over $Q$ gives $d=1$ and $r=0$, in other words, $G_S (Q)$ is the free pro-$p$-group on a single generator, which is $Z_p$. Equivalently, $Q_S = Q_{\\infty}$ , which implies that $K$ = a certain $Q_n$ , and the answer to your question is trivially YES</p>\n\n<p>2) If $T$ is not empty, i.e. tame ramification is allowed, let $ Q_m = K \\cap Q_{\\infty}$ , $G = Gal(K/Q)$ and $H = Gal(K/Q_m)$. Suppose, as you do, that there exists $g \\in G$, of order $n&gt;m$, such that its image $ g$ mod $H$ in $G/H$ generates $G/H = Gal(Q_m/Q)$. 
Replace $K$ by the fixed field $L$ of $(H \\cap&lt;g&gt;)$, so that the image $g$ mod $(H\\cap&lt;g&gt;)$ in $Gal(L/Q)$ generates a subgroup isomorphic to $Gal(Q_m/Q)$. This means that that the exact sequence $1 \\to Gal(L/Q_m) \\to Gal(L/Q) \\to Gal(Q_m/Q) \\to 1$ splits, i.e. describes a semi-direct product. So in this case the answer to your question will be NO if we can construct the adequate extension $L/Q_m$ . If $T$ = {$l_1,...,l_r$}, the number of relations of $G_S (Q)$ will be $r$, more precisely the relations will come from $r$ local tame relations of the type (<em>op. cit.</em>, §10, thm. 10.2) $t^{l-1}[t^{-1}, s^{-1}]=1$ (necessarily, $l\\equiv 1$ mod $p$ to allow ramification). This means that the maximal abelian quotient of $G_S (Q)$, considered as a $Z_p$-module, will have rank 1 and torsion isomorphic to a product of cyclic groups $Z/(l -1)Z$. By adjusting our $l$, we can immediately construct the desired extension $L/Q_m$ ./.</p>\n" }, { "answer_id": 2093136, "score": 1, "text": "<p>On re-reading my second comment, I do not find it very enlightening, so I propose to replace it by this more detailed answer (mainly in order to be able to type math symbols (*)). I also edited two unfortunate misprints in my first answer (just before the exact sequence of Galois groups).</p>\n\n<p>So let us construct our abelian extension $K/Q$ with the required $g$ of order $p^n$, with $n&gt;m$, and such that $Gal(K/Q)\\cong Gal(K/Q_m) \\times Gal(Q_m/Q)$. We have seen that he Galois group of the maximal abelian $S$-ramified $S$-extension of $Q$ is a direct product ($Z_p, +$)$\\times \\mathcal T$, where the torsion group $\\mathcal T$ is a finite product of cyclic groups of the form $Z/(l-1)Z$. Because a subgroup of $Z_p$ is isomorphic to $Z_p$ hence is pro-$p$-free, $Gal(K/Q)$ will be viewed as a quotient of $Gal(Q_m/Q) \\times\\mathcal T$ and $Gal(K/Q_m)$ as a quotient of $\\mathcal T$. Then choose $l \\in T$, such that $p^n$ divides exactly $ 1- p^n$, so that $Gal(K/Q)$ will contain an element $g$ of order $p^n$ coming from $\\mathcal T$. Taking further $l' \\in T, l' \\neq l $, $T=${$l, l'$}, and $Gal(K/Q_m) \\cong$ the factor $Z/(l'-1)Z$ of $\\mathcal T$, we get our extension $K$ with the desired properties. Things could be seen much more clearly by drawing Galois diagrams.</p>\n" }, { "answer_id": 2087797, "score": 2, "text": "<p>Since the prime $p$ ($\\neq 2$ for simplification) plays a special role, let us concentrate on the case when $K/Q$ is a $p$-extension, i.e. a Galois extension whose group $G$ is a finite $p$-group. The case of a general $G$ can be derived from this provided we have sufficient information on the structure of $G$. Suppose also that $K/Q$ is $S$-ramified, i.e. unramified outside a finite set of primes $S$={$p$} $\\cup$ $T$, with $T$ = {$l_1,...,l_r$} or $T$ empty. Note that the presence of $p$ is necessary because we'll deal with the so-called cyclotomic $Z_p$-extension of $Q$, namely $G_S (k) = \\cup Q_n$. </p>\n\n<p>For a number field $k$ and the set $S_k$ of places of $k$ above $S$ (we'll write $S$ for $S_k$, no confusion is possible), denote by $k_S$ the maximal pro-$p$-extension of $k$ which is $S$-ramified, and $G_S (k) = Gal(k_S/k)$. The description of $G_S (k)$ by generators and relations can be approached using CFT and Galois cohomology, as for example in H. Koch's book <em>\"Galois Theory of p-extensions\"</em>. In particular, the minimal number $d$ of generators (resp. 
the number $r$ of relations) of $G_S (k)$ is equal to the $Z/pZ$-dimension of $H^1(G_S (k), Z/pZ)$ (resp. of $H^2(.)$). Actually, $d$ has been computed by Shafarevitch (<em>op. cit.</em>,§11, thm. 11.8), and $r$ is given by a so-called Euler-Poincaré characteristic formula : $1-d+r = -c$, where $c$ is the number of complex places of $k$.</p>\n\n<p>After these preliminaries, we can attack your problem over $k = Q$. Since no ramification condition is specified, we must consider two separate cases:</p>\n\n<p>1) If $T$ is empty, i.e. only wild ramification is allowed, Shafarevitch's formula over $Q$ gives $d=1$ and $r=0$, in other words, $G_S (Q)$ is the free pro-$p$-group on a single generator, which is $Z_p$. Equivalently, $Q_S = Q_{\\infty}$ , which implies that $K$ = a certain $Q_n$ , and the answer to your question is trivially YES</p>\n\n<p>2) If $T$ is not empty, i.e. tame ramification is allowed, let $ Q_m = K \\cap Q_{\\infty}$ , $G = Gal(K/Q)$ and $H = Gal(K/Q_m)$. Suppose, as you do, that there exists $g \\in G$, of order $n&gt;m$, such that its image $ g$ mod $H$ in $G/H$ generates $G/H = Gal(Q_m/Q)$. Replace $K$ by the fixed field $L$ of $(H \\cap&lt;g&gt;)$, so that the image $g$ mod $(H\\cap&lt;g&gt;)$ in $Gal(L/Q)$ generates a subgroup isomorphic to $Gal(Q_m/Q)$. This means that that the exact sequence $1 \\to Gal(L/Q_m) \\to Gal(L/Q) \\to Gal(Q_m/Q) \\to 1$ splits, i.e. describes a semi-direct product. So in this case the answer to your question will be NO if we can construct the adequate extension $L/Q_m$ . If $T$ = {$l_1,...,l_r$}, the number of relations of $G_S (Q)$ will be $r$, more precisely the relations will come from $r$ local tame relations of the type (<em>op. cit.</em>, §10, thm. 10.2) $t^{l-1}[t^{-1}, s^{-1}]=1$ (necessarily, $l\\equiv 1$ mod $p$ to allow ramification). This means that the maximal abelian quotient of $G_S (Q)$, considered as a $Z_p$-module, will have rank 1 and torsion isomorphic to a product of cyclic groups $Z/(l -1)Z$. By adjusting our $l$, we can immediately construct the desired extension $L/Q_m$ ./.</p>\n" }, { "answer_id": 2093136, "score": 1, "text": "<p>On re-reading my second comment, I do not find it very enlightening, so I propose to replace it by this more detailed answer (mainly in order to be able to type math symbols (*)). I also edited two unfortunate misprints in my first answer (just before the exact sequence of Galois groups).</p>\n\n<p>So let us construct our abelian extension $K/Q$ with the required $g$ of order $p^n$, with $n&gt;m$, and such that $Gal(K/Q)\\cong Gal(K/Q_m) \\times Gal(Q_m/Q)$. We have seen that he Galois group of the maximal abelian $S$-ramified $S$-extension of $Q$ is a direct product ($Z_p, +$)$\\times \\mathcal T$, where the torsion group $\\mathcal T$ is a finite product of cyclic groups of the form $Z/(l-1)Z$. Because a subgroup of $Z_p$ is isomorphic to $Z_p$ hence is pro-$p$-free, $Gal(K/Q)$ will be viewed as a quotient of $Gal(Q_m/Q) \\times\\mathcal T$ and $Gal(K/Q_m)$ as a quotient of $\\mathcal T$. Then choose $l \\in T$, such that $p^n$ divides exactly $ 1- p^n$, so that $Gal(K/Q)$ will contain an element $g$ of order $p^n$ coming from $\\mathcal T$. Taking further $l' \\in T, l' \\neq l $, $T=${$l, l'$}, and $Gal(K/Q_m) \\cong$ the factor $Z/(l'-1)Z$ of $\\mathcal T$, we get our extension $K$ with the desired properties. Things could be seen much more clearly by drawing Galois diagrams.</p>\n" } ]
Question: Relationship between wild ramification and restriction in Galois extension Answer 1: <p>Since the prime $p$ ($\neq 2$ for simplification) plays a special role, let us concentrate on the case when $K/Q$ is a $p$-extension, i.e. a Galois extension whose group $G$ is a finite $p$-group. The case of a general $G$ can be derived from this provided we have sufficient information on the structure of $G$. Suppose also that $K/Q$ is $S$-ramified, i.e. unramified outside a finite set of primes $S$={$p$} $\cup$ $T$, with $T$ = {$l_1,...,l_r$} or $T$ empty. Note that the presence of $p$ is necessary because we'll deal with the so-called cyclotomic $Z_p$-extension of $Q$, namely $Q_{\infty} = \cup Q_n$.</p> <p>For a number field $k$ and the set $S_k$ of places of $k$ above $S$ (we'll write $S$ for $S_k$, no confusion is possible), denote by $k_S$ the maximal pro-$p$-extension of $k$ which is $S$-ramified, and $G_S (k) = Gal(k_S/k)$. The description of $G_S (k)$ by generators and relations can be approached using CFT and Galois cohomology, as for example in H. Koch's book <em>"Galois Theory of p-extensions"</em>. In particular, the minimal number $d$ of generators (resp. the number $r$ of relations) of $G_S (k)$ is equal to the $Z/pZ$-dimension of $H^1(G_S (k), Z/pZ)$ (resp. of $H^2(.)$). Actually, $d$ has been computed by Shafarevitch (<em>op. cit.</em>, §11, thm. 11.8), and $r$ is given by a so-called Euler-Poincaré characteristic formula: $1-d+r = -c$, where $c$ is the number of complex places of $k$.</p> <p>After these preliminaries, we can attack your problem over $k = Q$. Since no ramification condition is specified, we must consider two separate cases:</p> <p>1) If $T$ is empty, i.e. only wild ramification is allowed, Shafarevitch's formula over $Q$ gives $d=1$ and $r=0$; in other words, $G_S (Q)$ is the free pro-$p$-group on a single generator, which is $Z_p$. Equivalently, $Q_S = Q_{\infty}$, which implies that $K$ = a certain $Q_n$, and the answer to your question is trivially YES.</p> <p>2) If $T$ is not empty, i.e. tame ramification is allowed, let $Q_m = K \cap Q_{\infty}$, $G = Gal(K/Q)$ and $H = Gal(K/Q_m)$. Suppose, as you do, that there exists $g \in G$, of order $p^n$ with $n&gt;m$, such that its image $g$ mod $H$ in $G/H$ generates $G/H = Gal(Q_m/Q)$. Replace $K$ by the fixed field $L$ of $(H \cap&lt;g&gt;)$, so that the image $g$ mod $(H\cap&lt;g&gt;)$ in $Gal(L/Q)$ generates a subgroup isomorphic to $Gal(Q_m/Q)$. This means that the exact sequence $1 \to Gal(L/Q_m) \to Gal(L/Q) \to Gal(Q_m/Q) \to 1$ splits, i.e. describes a semi-direct product. So in this case the answer to your question will be NO if we can construct the adequate extension $L/Q_m$. If $T$ = {$l_1,...,l_r$}, the number of relations of $G_S (Q)$ will be $r$; more precisely the relations will come from $r$ local tame relations of the type (<em>op. cit.</em>, §10, thm. 10.2) $t^{l-1}[t^{-1}, s^{-1}]=1$ (necessarily, $l\equiv 1$ mod $p$ to allow ramification). This means that the maximal abelian quotient of $G_S (Q)$, considered as a $Z_p$-module, will have rank 1 and torsion isomorphic to a product of cyclic groups $Z/(l -1)Z$. By adjusting our $l$, we can immediately construct the desired extension $L/Q_m$ ./.</p> Answer 2: <p>On re-reading my second comment, I do not find it very enlightening, so I propose to replace it by this more detailed answer (mainly in order to be able to type math symbols (*)). I also edited two unfortunate misprints in my first answer (just before the exact sequence of Galois groups).</p> <p>So let us construct our abelian extension $K/Q$ with the required $g$ of order $p^n$, with $n&gt;m$, and such that $Gal(K/Q)\cong Gal(K/Q_m) \times Gal(Q_m/Q)$. We have seen that the Galois group of the maximal abelian $S$-ramified $p$-extension of $Q$ is a direct product ($Z_p, +$)$\times \mathcal T$, where the torsion group $\mathcal T$ is a finite product of cyclic groups of the form $Z/(l-1)Z$. Because a subgroup of $Z_p$ is isomorphic to $Z_p$, hence is pro-$p$-free, $Gal(K/Q)$ will be viewed as a quotient of $Gal(Q_m/Q) \times \mathcal T$ and $Gal(K/Q_m)$ as a quotient of $\mathcal T$. Then choose $l \in T$ such that $p^n$ divides exactly $l - 1$, so that $Gal(K/Q)$ will contain an element $g$ of order $p^n$ coming from $\mathcal T$. Taking further $l' \in T$, $l' \neq l$, $T=${$l, l'$}, and $Gal(K/Q_m) \cong$ the factor $Z/(l'-1)Z$ of $\mathcal T$, we get our extension $K$ with the desired properties. Things could be seen much more clearly by drawing Galois diagrams.</p>
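To make the choice of primes in the last answer concrete: the construction needs a prime $l$ with $p^n$ dividing $l-1$ exactly, and a second prime $l' \equiv 1 \bmod p$. The following search sketch is illustrative only; the trial-division helper and function names are mine, not from the answer:

```python
def is_prime(m):
    """Naive trial division; adequate for the small primes needed here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def tame_prime(p, n):
    """Smallest prime l with p^n exactly dividing l - 1
    (i.e. p^n | l - 1 but p^(n+1) does not divide l - 1)."""
    l = p ** n + 1
    while not (is_prime(l) and (l - 1) % p ** (n + 1) != 0):
        l += p ** n  # keep l ≡ 1 (mod p^n)
    return l

print(tame_prime(3, 2))  # 19: 19 - 1 = 18 = 2 * 3^2, so 3^2 exactly divides 18
print(tame_prime(3, 1))  # 7: usable as the auxiliary prime l' ≡ 1 (mod 3)
```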
Proof of Almost uniform convergence implies Convergence almost everywhere
2,085,080
4
https://math.stackexchange.com/questions/2085080/proof-of-almost-uniform-convergence-implies-convergence-almost-everywhere
<p>I read the same proof that almost uniform convergence implies convergence almost everywhere on several sources (Friedman's Foundations of Modern Analysis and online sources), and they all seem to use the same proof:</p> <p><a href="https://i.stack.imgur.com/E2w1O.png" rel="nofollow noreferrer">Proof taken from Proofwiki, https://proofwiki.org/wiki/Convergence_a.u._Implies_Convergence_a.e.</a></p> <p>However, I have a problem with this proof. While the set on which $f_n$ does not converge to $f$ is definitely a subset of $B$, I do not see why the reverse inclusion (that $f_n$ does not converge to $f$ for every element of $B$) is true. Hence, might it not be possible that $f_n$ converges to $f$ only on a proper subset of $B$ that just happens to be non-measurable? (Which is possible since the measure space is not specified to be complete.) Then the proof would be false.</p> <p>I would really appreciate help in understanding why my critique of the proof does not hold. Thanks in advance.</p>
[ { "answer_id": 2085102, "score": 2, "text": "<p>Actually if $(f_n)_{n \\in \\mathbf{N}}$ is a sequence of measurable functions to a complete space, then the set of convergence points of $(f_n)$ is <em>always</em> measurable. <a href=\"https://proofwiki.org/wiki/Definition:Almost_Uniform_Convergence\" rel=\"nofollow noreferrer\">In your example</a>, $f_n : D \\to \\mathbf{R}$ and $\\mathbf{R}$ is a complete space.</p>\n\n<p>$$\\{x \\in D \\; \\vert \\; (f_n(x)) \\text{ converges}\\}=\\bigcap_{\\epsilon \\in \\mathbf{Q_+^*}}\\bigcup_{N \\in \\mathbf{N}}\\bigcap_{n,m\\ge N}\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}$$</p>\n\n<p>And $\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}=|f_n-f_m|^{-1}([0,\\epsilon])$ is measurable because $|f_n-f_m|$ is still measurable.</p>\n" }, { "answer_id": 2085131, "score": 1, "text": "<p>This answer seeks to address your confusion over the definition of \"almost everywhere\" (a.e.). According to Rudin - Real and Complex Analysis, something occurs a.e. if the set where it doesn't occur is a <em>subset</em> of a set of measure 0. If the measure isn't complete the actual set on which it doesn't occur could be non-measurable. This fact is used, for example, in Rudin's book in Chapter 8 when proving that (under certain conditions) if the projection of a subset of $U \\subset X \\times Y$ onto $X$ has measure 0 for almost all $y \\in Y$, $U$ has measure 0.</p>\n" }, { "answer_id": 2085102, "score": 2, "text": "<p>Actually if $(f_n)_{n \\in \\mathbf{N}}$ is a sequence of measurable functions to a complete space, then the set of convergence points of $(f_n)$ is <em>always</em> measurable. <a href=\"https://proofwiki.org/wiki/Definition:Almost_Uniform_Convergence\" rel=\"nofollow noreferrer\">In your example</a>, $f_n : D \\to \\mathbf{R}$ and $\\mathbf{R}$ is a complete space.</p>\n\n<p>$$\\{x \\in D \\; \\vert \\; (f_n(x)) \\text{ converges}\\}=\\bigcap_{\\epsilon \\in \\mathbf{Q_+^*}}\\bigcup_{N \\in \\mathbf{N}}\\bigcap_{n,m\\ge N}\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}$$</p>\n\n<p>And $\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}=|f_n-f_m|^{-1}([0,\\epsilon])$ is measurable because $|f_n-f_m|$ is still measurable.</p>\n" }, { "answer_id": 2085131, "score": 1, "text": "<p>This answer seeks to address your confusion over the definition of \"almost everywhere\" (a.e.). According to Rudin - Real and Complex Analysis, something occurs a.e. if the set where it doesn't occur is a <em>subset</em> of a set of measure 0. If the measure isn't complete the actual set on which it doesn't occur could be non-measurable. This fact is used, for example, in Rudin's book in Chapter 8 when proving that (under certain conditions) if the projection of a subset of $U \\subset X \\times Y$ onto $X$ has measure 0 for almost all $y \\in Y$, $U$ has measure 0.</p>\n" }, { "answer_id": 2085102, "score": 2, "text": "<p>Actually if $(f_n)_{n \\in \\mathbf{N}}$ is a sequence of measurable functions to a complete space, then the set of convergence points of $(f_n)$ is <em>always</em> measurable. 
<a href=\"https://proofwiki.org/wiki/Definition:Almost_Uniform_Convergence\" rel=\"nofollow noreferrer\">In your example</a>, $f_n : D \\to \\mathbf{R}$ and $\\mathbf{R}$ is a complete space.</p>\n\n<p>$$\\{x \\in D \\; \\vert \\; (f_n(x)) \\text{ converges}\\}=\\bigcap_{\\epsilon \\in \\mathbf{Q_+^*}}\\bigcup_{N \\in \\mathbf{N}}\\bigcap_{n,m\\ge N}\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}$$</p>\n\n<p>And $\\{x \\in D \\; \\vert \\; |f_n(x)-f_m(x)| \\le \\epsilon\\}=|f_n-f_m|^{-1}([0,\\epsilon])$ is measurable because $|f_n-f_m|$ is still measurable.</p>\n" }, { "answer_id": 2085131, "score": 1, "text": "<p>This answer seeks to address your confusion over the definition of \"almost everywhere\" (a.e.). According to Rudin - Real and Complex Analysis, something occurs a.e. if the set where it doesn't occur is a <em>subset</em> of a set of measure 0. If the measure isn't complete the actual set on which it doesn't occur could be non-measurable. This fact is used, for example, in Rudin's book in Chapter 8 when proving that (under certain conditions) if the projection of a subset of $U \\subset X \\times Y$ onto $X$ has measure 0 for almost all $y \\in Y$, $U$ has measure 0.</p>\n" } ]
Question: Proof of Almost uniform convergence implies Convergence almost everywhere Answer 1: <p>Actually if $(f_n)_{n \in \mathbf{N}}$ is a sequence of measurable functions to a complete space, then the set of convergence points of $(f_n)$ is <em>always</em> measurable. <a href="https://proofwiki.org/wiki/Definition:Almost_Uniform_Convergence" rel="nofollow noreferrer">In your example</a>, $f_n : D \to \mathbf{R}$ and $\mathbf{R}$ is a complete space.</p> <p>$$\{x \in D \; \vert \; (f_n(x)) \text{ converges}\}=\bigcap_{\epsilon \in \mathbf{Q_+^*}}\bigcup_{N \in \mathbf{N}}\bigcap_{n,m\ge N}\{x \in D \; \vert \; |f_n(x)-f_m(x)| \le \epsilon\}$$</p> <p>And $\{x \in D \; \vert \; |f_n(x)-f_m(x)| \le \epsilon\}=|f_n-f_m|^{-1}([0,\epsilon])$ is measurable because $|f_n-f_m|$ is still measurable.</p> Answer 2: <p>This answer seeks to address your confusion over the definition of "almost everywhere" (a.e.). According to Rudin - Real and Complex Analysis, something occurs a.e. if the set where it doesn't occur is a <em>subset</em> of a set of measure 0. If the measure isn't complete, the actual set on which it doesn't occur could be non-measurable. This fact is used, for example, in Rudin's book in Chapter 8 when proving that (under certain conditions) if almost all sections of a set $U \subset X \times Y$ have measure 0, then $U$ has measure 0.</p>
why is $\mathbb{C}P^n$ a strong neighbourhood-deformation retraction of $\mathbb{C}P^{n+1}$
2,085,132
4
https://math.stackexchange.com/questions/2085132/why-is-mathbbcpn-a-strong-neighbourhood-deformation-retraction-of-mathbb
<p>Let $X$ be a topological space. A subspace $A\subset X$ is said to be a strong neighbourhood-deformation retract of $X$, if there is an open neighbourhood $U$ of $A$ and a continuous map $h:U\times [0,1]\to U$ such that</p> <p>$h(u,0)=u$, $\;h(u,1)\in A$ and $h(a,t)=a$ for all $u\in U$ and $(a,t)\in A\times [0,1]$. (Note that this is not exactly the same definition as $A$ being a strong deformation retract of $X$, as you can see here <a href="https://en.wikipedia.org/wiki/Retract" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Retract</a>. However, $A$ is a strong deformation retract in $U$.)</p> <p>Consider the complex projective space $\mathbb{C}P^n:=\mathbb{C}^{n+1}\setminus \{0\} /\sim$, where $x,y\in \mathbb{C}^{n+1}\setminus \{0\}$ satisfy $x\sim y$ $:\iff \exists \lambda\in \mathbb{C}\setminus \{0\}$ such that $x=\lambda y$. I know that $\mathbb{C}P^n\cong S^{2n+1}/x\sim \lambda x,\;\lambda\in S^1$. </p> <p>My question: Why is $\mathbb{C}P^n$ a strong neighbourhood-deformation retract of $\mathbb{C}P^{n+1}$? I need this fact for other calculations, but I don't know why this is true.</p> <p>I appreciate your help.</p> <p><strong>Edit</strong>: Can I use that $S^{n-1}$ is a strong neighbourhood deformation retract of $D^n$ (with $U=\mathbb{R}^n\setminus \{0\}$)?</p>
[ { "answer_id": 2085146, "score": 3, "text": "<p>Since projective space is a manifold it suffices to use the fact that any closed submanifold $S\\subset M$ is a SNDR (in the ambient manifold $M$). More specifically, a neighborhood of $S$ in $M$ will be diffeomorphic to a vector bundle and using such an identification you can then use the linear structure to construct a deformation retraction explicitly.</p>\n" }, { "answer_id": 2085228, "score": 3, "text": "<p>$\\newcommand{\\Cpx}{\\mathbf{C}}\\newcommand{\\Proj}{\\mathbf{P}}$Let $x = [x_{0}: x_{1} : \\dots : x_{n}]$ be homogeneous coordinates on $\\Cpx\\Proj^{n}$, identify $\\Cpx\\Proj^{n-1}$ with the locus $\\{x_{0} = 0\\}$, and let $U$ be the complement of the point $O = [1 : 0 : \\dots : 0]$.</p>\n\n<p>The mapping\n$$\nh(x, t) = [(1 - t)x_{0} : x_{1} : \\cdots : x_{n}]\n$$\nsatisfies the desired conditions.</p>\n\n<p>To see geometrically what this does, look in $\\Cpx^{n+1}$, with $\\Cpx^{n}$ represented by the hyperplane $\\{x_{0} = 0\\}$ and $U$ the complement of the $x_{0}$-axis. The map $h$ continuously (in fact, smoothly) joins the identity map (at $t = 0$) to the coordinate projection from $\\Cpx^{n+1}$ to $\\Cpx^{n}$ (at $t = 1$), a.k.a. <em>projection away from a point</em> (namely $O$ in $\\Cpx\\Proj^{n}$). The points on the $x_{0}$-axis are precisely those whose image is the origin (and so does not define a point of $\\Cpx\\Proj^{n-1}$).</p>\n" }, { "answer_id": 2085146, "score": 3, "text": "<p>Since projective space is a manifold it suffices to use the fact that any closed submanifold $S\\subset M$ is a SNDR (in the ambient manifold $M$). More specifically, a neighborhood of $S$ in $M$ will be diffeomorphic to a vector bundle and using such an identification you can then use the linear structure to construct a deformation retraction explicitly.</p>\n" }, { "answer_id": 2085228, "score": 3, "text": "<p>$\\newcommand{\\Cpx}{\\mathbf{C}}\\newcommand{\\Proj}{\\mathbf{P}}$Let $x = [x_{0}: x_{1} : \\dots : x_{n}]$ be homogeneous coordinates on $\\Cpx\\Proj^{n}$, identify $\\Cpx\\Proj^{n-1}$ with the locus $\\{x_{0} = 0\\}$, and let $U$ be the complement of the point $O = [1 : 0 : \\dots : 0]$.</p>\n\n<p>The mapping\n$$\nh(x, t) = [(1 - t)x_{0} : x_{1} : \\cdots : x_{n}]\n$$\nsatisfies the desired conditions.</p>\n\n<p>To see geometrically what this does, look in $\\Cpx^{n+1}$, with $\\Cpx^{n}$ represented by the hyperplane $\\{x_{0} = 0\\}$ and $U$ the complement of the $x_{0}$-axis. The map $h$ continuously (in fact, smoothly) joins the identity map (at $t = 0$) to the coordinate projection from $\\Cpx^{n+1}$ to $\\Cpx^{n}$ (at $t = 1$), a.k.a. <em>projection away from a point</em> (namely $O$ in $\\Cpx\\Proj^{n}$). The points on the $x_{0}$-axis are precisely those whose image is the origin (and so does not define a point of $\\Cpx\\Proj^{n-1}$).</p>\n" }, { "answer_id": 2085146, "score": 3, "text": "<p>Since projective space is a manifold it suffices to use the fact that any closed submanifold $S\\subset M$ is a SNDR (in the ambient manifold $M$). 
More specifically, a neighborhood of $S$ in $M$ will be diffeomorphic to a vector bundle and using such an identification you can then use the linear structure to construct a deformation retraction explicitly.</p>\n" }, { "answer_id": 2085228, "score": 3, "text": "<p>$\\newcommand{\\Cpx}{\\mathbf{C}}\\newcommand{\\Proj}{\\mathbf{P}}$Let $x = [x_{0}: x_{1} : \\dots : x_{n}]$ be homogeneous coordinates on $\\Cpx\\Proj^{n}$, identify $\\Cpx\\Proj^{n-1}$ with the locus $\\{x_{0} = 0\\}$, and let $U$ be the complement of the point $O = [1 : 0 : \\dots : 0]$.</p>\n\n<p>The mapping\n$$\nh(x, t) = [(1 - t)x_{0} : x_{1} : \\cdots : x_{n}]\n$$\nsatisfies the desired conditions.</p>\n\n<p>To see geometrically what this does, look in $\\Cpx^{n+1}$, with $\\Cpx^{n}$ represented by the hyperplane $\\{x_{0} = 0\\}$ and $U$ the complement of the $x_{0}$-axis. The map $h$ continuously (in fact, smoothly) joins the identity map (at $t = 0$) to the coordinate projection from $\\Cpx^{n+1}$ to $\\Cpx^{n}$ (at $t = 1$), a.k.a. <em>projection away from a point</em> (namely $O$ in $\\Cpx\\Proj^{n}$). The points on the $x_{0}$-axis are precisely those whose image is the origin (and so does not define a point of $\\Cpx\\Proj^{n-1}$).</p>\n" } ]
Question: why is $\mathbb{C}P^n$ a strong neighbourhood-deformation retraction of $\mathbb{C}P^{n+1}$ Answer 1: <p>Since projective space is a manifold it suffices to use the fact that any closed submanifold $S\subset M$ is a SNDR (in the ambient manifold $M$). More specifically, a neighborhood of $S$ in $M$ will be diffeomorphic to a vector bundle and using such an identification you can then use the linear structure to construct a deformation retraction explicitly.</p> Answer 2: <p>$\newcommand{\Cpx}{\mathbf{C}}\newcommand{\Proj}{\mathbf{P}}$Let $x = [x_{0}: x_{1} : \dots : x_{n}]$ be homogeneous coordinates on $\Cpx\Proj^{n}$, identify $\Cpx\Proj^{n-1}$ with the locus $\{x_{0} = 0\}$, and let $U$ be the complement of the point $O = [1 : 0 : \dots : 0]$.</p> <p>The mapping $$ h(x, t) = [(1 - t)x_{0} : x_{1} : \cdots : x_{n}] $$ satisfies the desired conditions.</p> <p>To see geometrically what this does, look in $\Cpx^{n+1}$, with $\Cpx^{n}$ represented by the hyperplane $\{x_{0} = 0\}$ and $U$ the complement of the $x_{0}$-axis. The map $h$ continuously (in fact, smoothly) joins the identity map (at $t = 0$) to the coordinate projection from $\Cpx^{n+1}$ to $\Cpx^{n}$ (at $t = 1$), a.k.a. <em>projection away from a point</em> (namely $O$ in $\Cpx\Proj^{n}$). The points on the $x_{0}$-axis are precisely those whose image is the origin (and so does not define a point of $\Cpx\Proj^{n-1}$).</p>
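The explicit homotopy in the second answer is easy to sanity-check numerically: on random points of $U$ one can verify that $h(\cdot,0)$ is the identity, that $h(\cdot,1)$ lands in $\{x_0=0\}$, and that points with $x_0=0$ stay fixed. A minimal sketch follows; the projective-equality helper is my own illustrative device, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x, t):
    """h(x, t) = [(1 - t) x_0 : x_1 : ... : x_n] in homogeneous coordinates."""
    y = x.copy()
    y[0] = (1 - t) * x[0]
    return y

def proj_eq(u, v):
    """u, v define the same projective point iff they are proportional,
    i.e. |<u, v>| = |u| |v| (equality case of Cauchy-Schwarz)."""
    return np.isclose(abs(np.vdot(u, v)), np.linalg.norm(u) * np.linalg.norm(v))

n = 3
for _ in range(100):
    x = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)  # a.s. in U
    assert proj_eq(h(x, 0.0), x)       # h(., 0) is the identity
    assert h(x, 1.0)[0] == 0           # h(., 1) lands in the hyperplane {x_0 = 0}
    a = x.copy(); a[0] = 0             # a point of the subspace {x_0 = 0}
    assert all(proj_eq(h(a, t), a) for t in (0.0, 0.5, 1.0))  # fixed for all t
```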
How to compute $E[\log(X)]$ when $X$ follows a beta distribution?
2,085,184
4
https://math.stackexchange.com/questions/2085184/how-to-compute-e-logx-when-x-follows-a-beta-distribution
<p>Given a <a href="https://en.wikipedia.org/wiki/Beta_distribution" rel="nofollow noreferrer">Beta</a> variable <span class="math-container">$X \sim B(\alpha\ge 2,\beta)$</span>, how do I compute the expectation of its logarithm <span class="math-container">$E[\log(X)]$</span>?</p> <p>This is deemed &quot;obvious&quot; on <a href="https://mathoverflow.net/q/159654/36889">MO</a>, but I see no easy way to compute <span class="math-container">$\int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\log x \; dx$</span>. Differentiating <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">Beta function</a> <span class="math-container">$B(\alpha,\beta)=\int_0^1 x^{\alpha-1}(1-x)^{\beta-1}dx$</span> by <span class="math-container">$\alpha$</span> results in <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="nofollow noreferrer">Digamma function</a> - is this really the way to go?</p> <p>Thanks.</p>
[ { "answer_id": 2085200, "score": 6, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Beta_distribution#Moments_of_logarithmically_transformed_random_variables\" rel=\"nofollow noreferrer\">Wikipedia</a> has the following for the beta distribution with all valid values for <span class=\"math-container\">$\\alpha$</span> and <span class=\"math-container\">$\\beta$</span>:</p>\n<p><span class=\"math-container\">$$\nE[\\log(X)] = \\psi(\\alpha) - \\psi(\\alpha+\\beta)\n$$</span></p>\n<p>where <span class=\"math-container\">$\\psi(x) = \\frac{\\Gamma'(x)}{\\Gamma(x)}$</span>.</p>\n<p>For completeness, the integral is calculated as follows, with <span class=\"math-container\">$B(\\alpha,\\beta)=\\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)}$</span>\n<span class=\"math-container\">$$\n\\int_0^1 \\ln x\\, f(x;\\alpha,\\beta)\\,dx \\\\[4pt]\n= \\int_0^1 \\ln x \\,\\frac{ x^{\\alpha-1}(1-x)^{\\beta-1}}{B(\\alpha,\\beta)}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\, \\int_0^1 \\frac{\\partial\\left( x^{\\alpha-1}(1-x)^{\\beta-1}\\right)}{\\partial \\alpha}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial}{\\partial \\alpha} \\int_0^1 x^{\\alpha-1}(1-x)^{\\beta-1}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln \\Gamma(\\alpha)}{\\partial \\alpha} - \\frac{\\partial \\ln \\Gamma(\\alpha + \\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\psi(\\alpha) - \\psi(\\alpha + \\beta)\n$$</span></p>\n" }, { "answer_id": 2085195, "score": 4, "text": "<p>$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx\n$$\nwe notice that\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}x^k = \\log x\n$$\nso we have an integral\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}x^k dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha + k -1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)} dx\n$$\nif we set $\\alpha + k \\to \\alpha'$ then we have\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha',\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\\int_{0}^1\\frac{x^{\\alpha'-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha',\\beta)} dx\n$$\nBut the integral is just integrating the $\\text{Beta}(\\alpha',\\beta)$ over the entire support i.e. 
equals 1.</p>\n\n<p>so \n$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha + k,\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\n$$\nSome helper information\n$$\n\\frac{\\partial}{\\partial \\alpha}\\text{Beta}(\\alpha,\\beta) = \\text{Beta}(\\alpha,\\beta)(\\psi(\\alpha) -\\psi(x+y))\n$$\n(which is where the digamma function arises)\nBe careful how you take limits with $k$</p>\n" }, { "answer_id": 2085200, "score": 6, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Beta_distribution#Moments_of_logarithmically_transformed_random_variables\" rel=\"nofollow noreferrer\">Wikipedia</a> has the following for the beta distribution with all valid values for <span class=\"math-container\">$\\alpha$</span> and <span class=\"math-container\">$\\beta$</span>:</p>\n<p><span class=\"math-container\">$$\nE[\\log(X)] = \\psi(\\alpha) - \\psi(\\alpha+\\beta)\n$$</span></p>\n<p>where <span class=\"math-container\">$\\psi(x) = \\frac{\\Gamma'(x)}{\\Gamma(x)}$</span>.</p>\n<p>For completeness, the integral is calculated as follows, with <span class=\"math-container\">$B(\\alpha,\\beta)=\\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)}$</span>\n<span class=\"math-container\">$$\n\\int_0^1 \\ln x\\, f(x;\\alpha,\\beta)\\,dx \\\\[4pt]\n= \\int_0^1 \\ln x \\,\\frac{ x^{\\alpha-1}(1-x)^{\\beta-1}}{B(\\alpha,\\beta)}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\, \\int_0^1 \\frac{\\partial\\left( x^{\\alpha-1}(1-x)^{\\beta-1}\\right)}{\\partial \\alpha}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial}{\\partial \\alpha} \\int_0^1 x^{\\alpha-1}(1-x)^{\\beta-1}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln \\Gamma(\\alpha)}{\\partial \\alpha} - \\frac{\\partial \\ln \\Gamma(\\alpha + \\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\psi(\\alpha) - \\psi(\\alpha + \\beta)\n$$</span></p>\n" }, { "answer_id": 2085195, "score": 4, "text": "<p>$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx\n$$\nwe notice that\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}x^k = \\log x\n$$\nso we have an integral\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}x^k dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha + k -1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)} dx\n$$\nif we set $\\alpha + k \\to \\alpha'$ then we have\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha',\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\\int_{0}^1\\frac{x^{\\alpha'-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha',\\beta)} dx\n$$\nBut the integral is just integrating the $\\text{Beta}(\\alpha',\\beta)$ over the entire support i.e. 
equals 1.</p>\n\n<p>so \n$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha + k,\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\n$$\nSome helper information\n$$\n\\frac{\\partial}{\\partial \\alpha}\\text{Beta}(\\alpha,\\beta) = \\text{Beta}(\\alpha,\\beta)(\\psi(\\alpha) -\\psi(x+y))\n$$\n(which is where the digamma function arises)\nBe careful how you take limits with $k$</p>\n" }, { "answer_id": 2085200, "score": 6, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Beta_distribution#Moments_of_logarithmically_transformed_random_variables\" rel=\"nofollow noreferrer\">Wikipedia</a> has the following for the beta distribution with all valid values for <span class=\"math-container\">$\\alpha$</span> and <span class=\"math-container\">$\\beta$</span>:</p>\n<p><span class=\"math-container\">$$\nE[\\log(X)] = \\psi(\\alpha) - \\psi(\\alpha+\\beta)\n$$</span></p>\n<p>where <span class=\"math-container\">$\\psi(x) = \\frac{\\Gamma'(x)}{\\Gamma(x)}$</span>.</p>\n<p>For completeness, the integral is calculated as follows, with <span class=\"math-container\">$B(\\alpha,\\beta)=\\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)}$</span>\n<span class=\"math-container\">$$\n\\int_0^1 \\ln x\\, f(x;\\alpha,\\beta)\\,dx \\\\[4pt]\n= \\int_0^1 \\ln x \\,\\frac{ x^{\\alpha-1}(1-x)^{\\beta-1}}{B(\\alpha,\\beta)}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\, \\int_0^1 \\frac{\\partial\\left( x^{\\alpha-1}(1-x)^{\\beta-1}\\right)}{\\partial \\alpha}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial}{\\partial \\alpha} \\int_0^1 x^{\\alpha-1}(1-x)^{\\beta-1}\\,dx \\\\[4pt]\n= \\frac{1}{B(\\alpha,\\beta)} \\frac{\\partial B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln B(\\alpha,\\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\frac{\\partial \\ln \\Gamma(\\alpha)}{\\partial \\alpha} - \\frac{\\partial \\ln \\Gamma(\\alpha + \\beta)}{\\partial \\alpha} \\\\[4pt]\n= \\psi(\\alpha) - \\psi(\\alpha + \\beta)\n$$</span></p>\n" }, { "answer_id": 2085195, "score": 4, "text": "<p>$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx\n$$\nwe notice that\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}x^k = \\log x\n$$\nso we have an integral\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}x^k dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\int_{0}^1\\frac{x^{\\alpha + k -1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)} dx\n$$\nif we set $\\alpha + k \\to \\alpha'$ then we have\n$$\n\\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha',\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\\int_{0}^1\\frac{x^{\\alpha'-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha',\\beta)} dx\n$$\nBut the integral is just integrating the $\\text{Beta}(\\alpha',\\beta)$ over the entire support i.e. equals 1.</p>\n\n<p>so \n$$\n\\int_{0}^1\\frac{x^{\\alpha-1}(1-x)^{\\beta-1}} {\\text{Beta}(\\alpha,\\beta)}\\log x dx = \\lim_{k\\to 0}\\frac{\\partial}{\\partial k}\\frac{{\\text{Beta}(\\alpha + k,\\beta)}}{{\\text{Beta}(\\alpha,\\beta)}}\n$$\nSome helper information\n$$\n\\frac{\\partial}{\\partial \\alpha}\\text{Beta}(\\alpha,\\beta) = \\text{Beta}(\\alpha,\\beta)(\\psi(\\alpha) -\\psi(x+y))\n$$\n(which is where the digamma function arises)\nBe careful how you take limits with $k$</p>\n" } ]
Question: How to compute $E[\log(X)]$ when $X$ follows a beta distribution? Answer 1: <p><a href="https://en.wikipedia.org/wiki/Beta_distribution#Moments_of_logarithmically_transformed_random_variables" rel="nofollow noreferrer">Wikipedia</a> has the following for the beta distribution with all valid values for <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>:</p> <p><span class="math-container">$$ E[\log(X)] = \psi(\alpha) - \psi(\alpha+\beta) $$</span></p> <p>where <span class="math-container">$\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$</span>.</p> <p>For completeness, the integral is calculated as follows, with <span class="math-container">$B(\alpha,\beta)=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$</span> <span class="math-container">$$ \int_0^1 \ln x\, f(x;\alpha,\beta)\,dx \\[4pt] = \int_0^1 \ln x \,\frac{ x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}\,dx \\[4pt] = \frac{1}{B(\alpha,\beta)} \, \int_0^1 \frac{\partial\left( x^{\alpha-1}(1-x)^{\beta-1}\right)}{\partial \alpha}\,dx \\[4pt] = \frac{1}{B(\alpha,\beta)} \frac{\partial}{\partial \alpha} \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx \\[4pt] = \frac{1}{B(\alpha,\beta)} \frac{\partial B(\alpha,\beta)}{\partial \alpha} \\[4pt] = \frac{\partial \ln B(\alpha,\beta)}{\partial \alpha} \\[4pt] = \frac{\partial \ln \Gamma(\alpha)}{\partial \alpha} - \frac{\partial \ln \Gamma(\alpha + \beta)}{\partial \alpha} \\[4pt] = \psi(\alpha) - \psi(\alpha + \beta) $$</span></p> Answer 2: <p>$$ \int_{0}^1\frac{x^{\alpha-1}(1-x)^{\beta-1}} {\text{Beta}(\alpha,\beta)}\log x dx $$ we notice that $$ \lim_{k\to 0}\frac{\partial}{\partial k}x^k = \log x $$ so we have an integral $$ \lim_{k\to 0}\frac{\partial}{\partial k}\int_{0}^1\frac{x^{\alpha-1}(1-x)^{\beta-1}} {\text{Beta}(\alpha,\beta)}x^k dx = \lim_{k\to 0}\frac{\partial}{\partial k}\int_{0}^1\frac{x^{\alpha + k -1}(1-x)^{\beta-1}} {\text{Beta}(\alpha,\beta)} dx $$ if we set $\alpha + k \to \alpha'$ then we have $$ \lim_{k\to 0}\frac{\partial}{\partial k}\frac{{\text{Beta}(\alpha',\beta)}}{{\text{Beta}(\alpha,\beta)}}\int_{0}^1\frac{x^{\alpha'-1}(1-x)^{\beta-1}} {\text{Beta}(\alpha',\beta)} dx $$ But the integral is just the $\text{Beta}(\alpha',\beta)$ density integrated over its entire support, i.e. it equals 1.</p> <p>so $$ \int_{0}^1\frac{x^{\alpha-1}(1-x)^{\beta-1}} {\text{Beta}(\alpha,\beta)}\log x dx = \lim_{k\to 0}\frac{\partial}{\partial k}\frac{{\text{Beta}(\alpha + k,\beta)}}{{\text{Beta}(\alpha,\beta)}} $$ Some helper information: $$ \frac{\partial}{\partial \alpha}\text{Beta}(\alpha,\beta) = \text{Beta}(\alpha,\beta)(\psi(\alpha) -\psi(\alpha+\beta)) $$ (which is where the digamma function arises) Be careful how you take limits with $k$</p>
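As a quick sanity check on the closed form above, here is a minimal Python sketch (not from the original post; it assumes numpy and scipy are available, and the parameter values are arbitrary choices):

```python
import numpy as np
from scipy.special import digamma

alpha, beta = 2.5, 4.0  # arbitrary test parameters
rng = np.random.default_rng(0)

# Monte Carlo estimate of E[log X] for X ~ Beta(alpha, beta)
samples = rng.beta(alpha, beta, size=1_000_000)
mc_estimate = np.log(samples).mean()

# Closed form from the answers: psi(alpha) - psi(alpha + beta)
closed_form = digamma(alpha) - digamma(alpha + beta)

print(mc_estimate, closed_form)  # the two values should agree to a few decimal places
```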
Probability: there are $n$ rooms and $m$ meetings, $m \leq n$; what's the probability that all meetings are scheduled to different rooms?
2,085,473
4
https://math.stackexchange.com/questions/2085473/probability-there-are-n-rooms-and-m-meetings-m-leq-n-whats-the-proba
<p>Quite new to stats... definitely not my strong area. I came across this probability question, and I am not sure how to do this! </p> <p>The question goes: </p> <blockquote> <p>Pretend that there's a meeting scheduling engine used by a company which is not synced in real time, so when people schedule their meetings online to book a room, there may be overlaps. Let's say there are $N$ rooms and $M$ meetings, where $M \leq N$. What is the probability that all meetings are scheduled to different rooms?</p> </blockquote> <p>My thought was that the first meeting doesn't matter, it can be in any room; then the 2nd meeting has $\frac{1}{N-1}$ chance of being in a room. So for two meetings not colliding, the chance of them being in separate rooms is $\frac{1}{N-1}$. Right? I am not confident about this one either... </p> <p>Any hint/advice/guidance helps!</p> <p><strong>update</strong> to clarify:</p> <blockquote> <p>1) each room can only host up to one meeting<br> 2) one meeting can only happen in one room</p> </blockquote>
[ { "answer_id": 2085478, "score": 3, "text": "<p>The total number of ways to distribute meetings to rooms is $N^M$, because for example the first meeting can take place in $N$ rooms, the second also in $N$ rooms, and so on.</p>\n\n<p>Let's calculate the number of favorable situations. The first meeting has $N$ choices. The second has $N-1$ choices and so on. So the probability of all meetings are in different rooms is $$\\frac{N(N-1)\\cdots(N-M+1)}{N^M}$$</p>\n" }, { "answer_id": 2085482, "score": 3, "text": "<p>If you are not talking about scheduling (for which time overlap would\nan issue), but instead talking about just where the meeting takes place, then\nthere are $N^M$ possible assignments of meeting rooms. There are $\\binom{N}{M}$\nways to choose $M$ different rooms for the meetings and $M!$ ways to assign the $M$ meetings to those rooms, so the probability is ${\\binom{N}{M}M! \\over N^M}$.</p>\n" }, { "answer_id": 2085511, "score": 2, "text": "<p>Same solution as the others with a slightly different perspective (somewhat in line with the reasoning you tried to provide).</p>\n\n<p>Let $E_i$ be the event that room $i$ does not collide with rooms $1,2,3,...,i-1$.</p>\n\n<p>As you noted the first room does not matter.</p>\n\n<p>$$P(E_1)=1$$</p>\n\n<p>For the second room, there are $N-1$ rooms we can pick out of the $N$ total taking into account the first room was chosen:</p>\n\n<p>$$P(E_2|E_1)=\\dfrac{N-1}{N}$$</p>\n\n<p>Similarly,</p>\n\n<p>$$P(E_3|E_2,E_1)=\\dfrac{N-2}{N}$$</p>\n\n<p>And the pattern continues in this fashion. </p>\n\n<p>$$P(E_i|E_{i-1},E_{i-2},\\cdots ,E_1) = \\dfrac{N-i+1}{N}$$</p>\n\n<p>You seek </p>\n\n<p>$$P(E_1\\cap E_2\\cap \\cdots\\cap E_m) = P(E_1)P(E_2|E_1)P(E_3|E_2,E_1)\\cdots P(E_m|E_{m-1},E_{m-2},\\cdots,E_1)$$</p>\n\n<p>$$=1\\cdot \\dfrac{N-1}{N}\\cdot \\dfrac{N-2}{N}\\cdots \\dfrac{N-m+1}{N}$$</p>\n" }, { "answer_id": 2085478, "score": 3, "text": "<p>The total number of ways to distribute meetings to rooms is $N^M$, because for example the first meeting can take place in $N$ rooms, the second also in $N$ rooms, and so on.</p>\n\n<p>Let's calculate the number of favorable situations. The first meeting has $N$ choices. The second has $N-1$ choices and so on. So the probability of all meetings are in different rooms is $$\\frac{N(N-1)\\cdots(N-M+1)}{N^M}$$</p>\n" }, { "answer_id": 2085482, "score": 3, "text": "<p>If you are not talking about scheduling (for which time overlap would\nan issue), but instead talking about just where the meeting takes place, then\nthere are $N^M$ possible assignments of meeting rooms. There are $\\binom{N}{M}$\nways to choose $M$ different rooms for the meetings and $M!$ ways to assign the $M$ meetings to those rooms, so the probability is ${\\binom{N}{M}M! \\over N^M}$.</p>\n" }, { "answer_id": 2085511, "score": 2, "text": "<p>Same solution as the others with a slightly different perspective (somewhat in line with the reasoning you tried to provide).</p>\n\n<p>Let $E_i$ be the event that room $i$ does not collide with rooms $1,2,3,...,i-1$.</p>\n\n<p>As you noted the first room does not matter.</p>\n\n<p>$$P(E_1)=1$$</p>\n\n<p>For the second room, there are $N-1$ rooms we can pick out of the $N$ total taking into account the first room was chosen:</p>\n\n<p>$$P(E_2|E_1)=\\dfrac{N-1}{N}$$</p>\n\n<p>Similarly,</p>\n\n<p>$$P(E_3|E_2,E_1)=\\dfrac{N-2}{N}$$</p>\n\n<p>And the pattern continues in this fashion. 
</p>\n\n<p>$$P(E_i|E_{i-1},E_{i-2},\\cdots ,E_1) = \\dfrac{N-i+1}{N}$$</p>\n\n<p>You seek </p>\n\n<p>$$P(E_1\\cap E_2\\cap \\cdots\\cap E_m) = P(E_1)P(E_2|E_1)P(E_3|E_2,E_1)\\cdots P(E_m|E_{m-1},E_{m-2},\\cdots,E_1)$$</p>\n\n<p>$$=1\\cdot \\dfrac{N-1}{N}\\cdot \\dfrac{N-2}{N}\\cdots \\dfrac{N-m+1}{N}$$</p>\n" }, { "answer_id": 2085478, "score": 3, "text": "<p>The total number of ways to distribute meetings to rooms is $N^M$, because for example the first meeting can take place in $N$ rooms, the second also in $N$ rooms, and so on.</p>\n\n<p>Let's calculate the number of favorable situations. The first meeting has $N$ choices. The second has $N-1$ choices and so on. So the probability of all meetings are in different rooms is $$\\frac{N(N-1)\\cdots(N-M+1)}{N^M}$$</p>\n" }, { "answer_id": 2085482, "score": 3, "text": "<p>If you are not talking about scheduling (for which time overlap would\nan issue), but instead talking about just where the meeting takes place, then\nthere are $N^M$ possible assignments of meeting rooms. There are $\\binom{N}{M}$\nways to choose $M$ different rooms for the meetings and $M!$ ways to assign the $M$ meetings to those rooms, so the probability is ${\\binom{N}{M}M! \\over N^M}$.</p>\n" }, { "answer_id": 2085511, "score": 2, "text": "<p>Same solution as the others with a slightly different perspective (somewhat in line with the reasoning you tried to provide).</p>\n\n<p>Let $E_i$ be the event that room $i$ does not collide with rooms $1,2,3,...,i-1$.</p>\n\n<p>As you noted the first room does not matter.</p>\n\n<p>$$P(E_1)=1$$</p>\n\n<p>For the second room, there are $N-1$ rooms we can pick out of the $N$ total taking into account the first room was chosen:</p>\n\n<p>$$P(E_2|E_1)=\\dfrac{N-1}{N}$$</p>\n\n<p>Similarly,</p>\n\n<p>$$P(E_3|E_2,E_1)=\\dfrac{N-2}{N}$$</p>\n\n<p>And the pattern continues in this fashion. </p>\n\n<p>$$P(E_i|E_{i-1},E_{i-2},\\cdots ,E_1) = \\dfrac{N-i+1}{N}$$</p>\n\n<p>You seek </p>\n\n<p>$$P(E_1\\cap E_2\\cap \\cdots\\cap E_m) = P(E_1)P(E_2|E_1)P(E_3|E_2,E_1)\\cdots P(E_m|E_{m-1},E_{m-2},\\cdots,E_1)$$</p>\n\n<p>$$=1\\cdot \\dfrac{N-1}{N}\\cdot \\dfrac{N-2}{N}\\cdots \\dfrac{N-m+1}{N}$$</p>\n" } ]
Question: Probability: there are $n$ rooms and $m$ meetings, $m \leq n$; what's the probability that all meetings are scheduled to different rooms? Answer 1: <p>The total number of ways to distribute meetings to rooms is $N^M$, because for example the first meeting can take place in $N$ rooms, the second also in $N$ rooms, and so on.</p> <p>Let's calculate the number of favorable situations. The first meeting has $N$ choices. The second has $N-1$ choices and so on. So the probability that all meetings are in different rooms is $$\frac{N(N-1)\cdots(N-M+1)}{N^M}$$</p> Answer 2: <p>If you are not talking about scheduling (for which time overlap would be an issue), but instead talking about just where the meeting takes place, then there are $N^M$ possible assignments of meeting rooms. There are $\binom{N}{M}$ ways to choose $M$ different rooms for the meetings and $M!$ ways to assign the $M$ meetings to those rooms, so the probability is ${\binom{N}{M}M! \over N^M}$.</p> Answer 3: <p>Same solution as the others with a slightly different perspective (somewhat in line with the reasoning you tried to provide).</p> <p>Let $E_i$ be the event that meeting $i$ does not collide with meetings $1,2,3,...,i-1$.</p> <p>As you noted, the first meeting does not matter.</p> <p>$$P(E_1)=1$$</p> <p>For the second meeting, there are $N-1$ rooms we can pick out of the $N$ total, taking into account the room the first meeting was given:</p> <p>$$P(E_2|E_1)=\dfrac{N-1}{N}$$</p> <p>Similarly,</p> <p>$$P(E_3|E_2,E_1)=\dfrac{N-2}{N}$$</p> <p>And the pattern continues in this fashion.</p> <p>$$P(E_i|E_{i-1},E_{i-2},\cdots ,E_1) = \dfrac{N-i+1}{N}$$</p> <p>You seek</p> <p>$$P(E_1\cap E_2\cap \cdots\cap E_m) = P(E_1)P(E_2|E_1)P(E_3|E_2,E_1)\cdots P(E_m|E_{m-1},E_{m-2},\cdots,E_1)$$</p> <p>$$=1\cdot \dfrac{N-1}{N}\cdot \dfrac{N-2}{N}\cdots \dfrac{N-m+1}{N}$$</p>
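The counting argument and the sequential conditional argument give the same number; a small Python sketch comparing the closed form to a simulation (my own check, not from the thread; the sizes are arbitrary choices):

```python
import math
import random

def p_all_distinct(n, m):
    """Closed form from the answers: N(N-1)...(N-M+1) / N^M."""
    return math.perm(n, m) / n**m

def simulate(n, m, trials=200_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        rooms = [random.randrange(n) for _ in range(m)]
        hits += len(set(rooms)) == m  # all meetings landed in different rooms
    return hits / trials

n, m = 10, 4  # arbitrary example sizes
print(p_all_distinct(n, m), simulate(n, m))  # both close to 0.504
```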
Continuity of a very familiar function.
2,085,600
4
https://math.stackexchange.com/questions/2085600/continuity-of-a-very-familiar-function
<p>Let $f:\left[0,1\right]\times\left[0,1\right]\rightarrow\mathbb{R}$ be continuous and let $g:\left[0,1\right]\rightarrow\left[0,1\right]$ also be continuous. Is the following function continuous? \begin{eqnarray*} h:\left[0,1\right] &amp; \rightarrow &amp; \mathbb{R}\\ x &amp; \rightarrow &amp; \sup_{g\left(x\right)\leq t\leq1}f\left(t,x\right) \end{eqnarray*}</p> <p>Thank you.</p>
[ { "answer_id": 2085754, "score": 2, "text": "<p>Define $\\phi(x,s) = f(g(x)+s(1-g(x)), x)$, then $h(x) = \\max_{t \\in [0,1]} \\phi(x,t)$.</p>\n\n<p>Suppose $x_n \\to x$, then $\\phi(x_n,t) \\to \\phi(x,t)$ for all $t$ and since\n$h(x_n)\\ge \\phi(x_n,t)$, we see $\\liminf_n h(x_n) \\ge \\phi(x,t)$ for all $t$ and\nso $\\liminf_n h(x_n) \\ge h(x)$.</p>\n\n<p>Let $t_n$ be such that $h(x_n) = \\phi(x_n,t_n)$, then let $x_{n_k}, t_{n_k}$\nbe such that\n$\\phi(x_{n_k}, t_{n_k}) \\to \\limsup_n h(x_n)$ and $ t_{n_k} \\to t^*$, then\n$\\phi(x_{n_k}, t_{n_k}) \\to \\phi(x,t^*) \\le h(x)$ and\nso\n$\\limsup_n h(x_n) \\le h(x)$.</p>\n\n<p>Hence $\\lim_n h(x_n) = h(x)$ and so $h$ is continuous.</p>\n" }, { "answer_id": 2085757, "score": 2, "text": "<p>Note that $f$ and $g$ are uniformly continuous. Let the metric $d$ on $[0,1]\\times[0,1]$ be $d((a,b),(x,y))=|x-a|+|y-b|$. This is equivalent to the usual metric. Fix $\\varepsilon&gt;0$ and pick $\\delta_1,\\delta_2&gt;0$ so small that if $|a-b|&lt;\\delta_1$, then $|g(a)-g(b)|&lt;\\dfrac{\\delta_2}{2}$ where if $d((a,b),(x,y))&lt;\\delta_2$, then $|f(a,b)-f(x,y)|&lt;\\varepsilon$. Let $x_0\\in[0,1]$ and $\\delta=\\dfrac{1}{2}\\min\\{\\delta_1,\\delta_2\\}$. </p>\n\n<p>If $|x-x_0|&lt;\\delta$, then $|h(x)-h(x_0)|=\\left|\\sup_{g(x)\\le t\\le 1}f(t,x)-\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)\\right|$. The sups are attained because of compactness, so we may as well let $f(a,x)=\\sup_{g(x)\\le t\\le 1}f(t,x)$ and $f(b,x_0)=\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)$. Note that $|f(g(x),x)-f(g(x_0),x_0)|&lt;\\varepsilon$ as we contrived that $|g(x)-g(x_0)|+|x-x_0|&lt;\\delta_2$.</p>\n\n<p>First suppose that $a$ and $b$ do not lie between $g(x)$ and $g(x_0)$. Then $f(a,x)&lt;f(a,x_0)+\\varepsilon\\le f(b,x_0)+\\varepsilon$ and $f(b,x_0)&lt;f(b,x)+\\varepsilon\\le f(a,x)+\\varepsilon$. Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$. </p>\n\n<p>Now suppose WLOG that $b$ is between $g(x)$ and $g(x_0)$ (the case of $a$ is identical). We have that $|f(a,x)-f(a,x_0)|&lt;\\varepsilon$ and $|f(g(x),x)-f(b,x_0)|&lt;\\varepsilon$. Hence, $f(a,x)&lt;\\varepsilon+f(a,x_0)\\le \\varepsilon+f(b,x_0)$ and $f(b,x_0)&lt;\\varepsilon +f(g(x),x)\\le \\varepsilon +f(a,x)\\ldotp$ Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$.</p>\n\n<p>Hence, $h$ is continuous.</p>\n" }, { "answer_id": 2085754, "score": 2, "text": "<p>Define $\\phi(x,s) = f(g(x)+s(1-g(x)), x)$, then $h(x) = \\max_{t \\in [0,1]} \\phi(x,t)$.</p>\n\n<p>Suppose $x_n \\to x$, then $\\phi(x_n,t) \\to \\phi(x,t)$ for all $t$ and since\n$h(x_n)\\ge \\phi(x_n,t)$, we see $\\liminf_n h(x_n) \\ge \\phi(x,t)$ for all $t$ and\nso $\\liminf_n h(x_n) \\ge h(x)$.</p>\n\n<p>Let $t_n$ be such that $h(x_n) = \\phi(x_n,t_n)$, then let $x_{n_k}, t_{n_k}$\nbe such that\n$\\phi(x_{n_k}, t_{n_k}) \\to \\limsup_n h(x_n)$ and $ t_{n_k} \\to t^*$, then\n$\\phi(x_{n_k}, t_{n_k}) \\to \\phi(x,t^*) \\le h(x)$ and\nso\n$\\limsup_n h(x_n) \\le h(x)$.</p>\n\n<p>Hence $\\lim_n h(x_n) = h(x)$ and so $h$ is continuous.</p>\n" }, { "answer_id": 2085757, "score": 2, "text": "<p>Note that $f$ and $g$ are uniformly continuous. Let the metric $d$ on $[0,1]\\times[0,1]$ be $d((a,b),(x,y))=|x-a|+|y-b|$. This is equivalent to the usual metric. Fix $\\varepsilon&gt;0$ and pick $\\delta_1,\\delta_2&gt;0$ so small that if $|a-b|&lt;\\delta_1$, then $|g(a)-g(b)|&lt;\\dfrac{\\delta_2}{2}$ where if $d((a,b),(x,y))&lt;\\delta_2$, then $|f(a,b)-f(x,y)|&lt;\\varepsilon$. Let $x_0\\in[0,1]$ and $\\delta=\\dfrac{1}{2}\\min\\{\\delta_1,\\delta_2\\}$. 
</p>\n\n<p>If $|x-x_0|&lt;\\delta$, then $|h(x)-h(x_0)|=\\left|\\sup_{g(x)\\le t\\le 1}f(t,x)-\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)\\right|$. The sups are attained because of compactness, so we may as well let $f(a,x)=\\sup_{g(x)\\le t\\le 1}f(t,x)$ and $f(b,x_0)=\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)$. Note that $|f(g(x),x)-f(g(x_0),x_0)|&lt;\\varepsilon$ as we contrived that $|g(x)-g(x_0)|+|x-x_0|&lt;\\delta_2$.</p>\n\n<p>First suppose that $a$ and $b$ do not lie between $g(x)$ and $g(x_0)$. Then $f(a,x)&lt;f(a,x_0)+\\varepsilon\\le f(b,x_0)+\\varepsilon$ and $f(b,x_0)&lt;f(b,x)+\\varepsilon\\le f(a,x)+\\varepsilon$. Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$. </p>\n\n<p>Now suppose WLOG that $b$ is between $g(x)$ and $g(x_0)$ (the case of $a$ is identical). We have that $|f(a,x)-f(a,x_0)|&lt;\\varepsilon$ and $|f(g(x),x)-f(b,x_0)|&lt;\\varepsilon$. Hence, $f(a,x)&lt;\\varepsilon+f(a,x_0)\\le \\varepsilon+f(b,x_0)$ and $f(b,x_0)&lt;\\varepsilon +f(g(x),x)\\le \\varepsilon +f(a,x)\\ldotp$ Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$.</p>\n\n<p>Hence, $h$ is continuous.</p>\n" }, { "answer_id": 2085754, "score": 2, "text": "<p>Define $\\phi(x,s) = f(g(x)+s(1-g(x)), x)$, then $h(x) = \\max_{t \\in [0,1]} \\phi(x,t)$.</p>\n\n<p>Suppose $x_n \\to x$, then $\\phi(x_n,t) \\to \\phi(x,t)$ for all $t$ and since\n$h(x_n)\\ge \\phi(x_n,t)$, we see $\\liminf_n h(x_n) \\ge \\phi(x,t)$ for all $t$ and\nso $\\liminf_n h(x_n) \\ge h(x)$.</p>\n\n<p>Let $t_n$ be such that $h(x_n) = \\phi(x_n,t_n)$, then let $x_{n_k}, t_{n_k}$\nbe such that\n$\\phi(x_{n_k}, t_{n_k}) \\to \\limsup_n h(x_n)$ and $ t_{n_k} \\to t^*$, then\n$\\phi(x_{n_k}, t_{n_k}) \\to \\phi(x,t^*) \\le h(x)$ and\nso\n$\\limsup_n h(x_n) \\le h(x)$.</p>\n\n<p>Hence $\\lim_n h(x_n) = h(x)$ and so $h$ is continuous.</p>\n" }, { "answer_id": 2085757, "score": 2, "text": "<p>Note that $f$ and $g$ are uniformly continuous. Let the metric $d$ on $[0,1]\\times[0,1]$ be $d((a,b),(x,y))=|x-a|+|y-b|$. This is equivalent to the usual metric. Fix $\\varepsilon&gt;0$ and pick $\\delta_1,\\delta_2&gt;0$ so small that if $|a-b|&lt;\\delta_1$, then $|g(a)-g(b)|&lt;\\dfrac{\\delta_2}{2}$ where if $d((a,b),(x,y))&lt;\\delta_2$, then $|f(a,b)-f(x,y)|&lt;\\varepsilon$. Let $x_0\\in[0,1]$ and $\\delta=\\dfrac{1}{2}\\min\\{\\delta_1,\\delta_2\\}$. </p>\n\n<p>If $|x-x_0|&lt;\\delta$, then $|h(x)-h(x_0)|=\\left|\\sup_{g(x)\\le t\\le 1}f(t,x)-\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)\\right|$. The sups are attained because of compactness, so we may as well let $f(a,x)=\\sup_{g(x)\\le t\\le 1}f(t,x)$ and $f(b,x_0)=\\sup_{g(x_0)\\le t\\le 1}f(t,x_0)$. Note that $|f(g(x),x)-f(g(x_0),x_0)|&lt;\\varepsilon$ as we contrived that $|g(x)-g(x_0)|+|x-x_0|&lt;\\delta_2$.</p>\n\n<p>First suppose that $a$ and $b$ do not lie between $g(x)$ and $g(x_0)$. Then $f(a,x)&lt;f(a,x_0)+\\varepsilon\\le f(b,x_0)+\\varepsilon$ and $f(b,x_0)&lt;f(b,x)+\\varepsilon\\le f(a,x)+\\varepsilon$. Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$. </p>\n\n<p>Now suppose WLOG that $b$ is between $g(x)$ and $g(x_0)$ (the case of $a$ is identical). We have that $|f(a,x)-f(a,x_0)|&lt;\\varepsilon$ and $|f(g(x),x)-f(b,x_0)|&lt;\\varepsilon$. Hence, $f(a,x)&lt;\\varepsilon+f(a,x_0)\\le \\varepsilon+f(b,x_0)$ and $f(b,x_0)&lt;\\varepsilon +f(g(x),x)\\le \\varepsilon +f(a,x)\\ldotp$ Hence, $|f(a,x)-f(b,x_0)|&lt;\\varepsilon$.</p>\n\n<p>Hence, $h$ is continuous.</p>\n" } ]
Question: Continuity of a very familiar function. Answer 1: <p>Define $\phi(x,t) = f(g(x)+t(1-g(x)), x)$, then $h(x) = \max_{t \in [0,1]} \phi(x,t)$.</p> <p>Suppose $x_n \to x$, then $\phi(x_n,t) \to \phi(x,t)$ for all $t$ and since $h(x_n)\ge \phi(x_n,t)$, we see $\liminf_n h(x_n) \ge \phi(x,t)$ for all $t$ and so $\liminf_n h(x_n) \ge h(x)$.</p> <p>Let $t_n$ be such that $h(x_n) = \phi(x_n,t_n)$, then let $x_{n_k}, t_{n_k}$ be such that $\phi(x_{n_k}, t_{n_k}) \to \limsup_n h(x_n)$ and $ t_{n_k} \to t^*$, then $\phi(x_{n_k}, t_{n_k}) \to \phi(x,t^*) \le h(x)$ and so $\limsup_n h(x_n) \le h(x)$.</p> <p>Hence $\lim_n h(x_n) = h(x)$ and so $h$ is continuous.</p> Answer 2: <p>Note that $f$ and $g$ are uniformly continuous. Let the metric $d$ on $[0,1]\times[0,1]$ be $d((a,b),(x,y))=|x-a|+|y-b|$. This is equivalent to the usual metric. Fix $\varepsilon&gt;0$ and pick $\delta_1,\delta_2&gt;0$ so small that if $|a-b|&lt;\delta_1$, then $|g(a)-g(b)|&lt;\dfrac{\delta_2}{2}$, and such that if $d((a,b),(x,y))&lt;\delta_2$, then $|f(a,b)-f(x,y)|&lt;\varepsilon$. Let $x_0\in[0,1]$ and $\delta=\dfrac{1}{2}\min\{\delta_1,\delta_2\}$. </p> <p>If $|x-x_0|&lt;\delta$, then $|h(x)-h(x_0)|=\left|\sup_{g(x)\le t\le 1}f(t,x)-\sup_{g(x_0)\le t\le 1}f(t,x_0)\right|$. The sups are attained because of compactness, so we may as well let $f(a,x)=\sup_{g(x)\le t\le 1}f(t,x)$ and $f(b,x_0)=\sup_{g(x_0)\le t\le 1}f(t,x_0)$. Note that $|f(g(x),x)-f(g(x_0),x_0)|&lt;\varepsilon$ as we contrived that $|g(x)-g(x_0)|+|x-x_0|&lt;\delta_2$.</p> <p>First suppose that $a$ and $b$ do not lie between $g(x)$ and $g(x_0)$. Then $f(a,x)&lt;f(a,x_0)+\varepsilon\le f(b,x_0)+\varepsilon$ and $f(b,x_0)&lt;f(b,x)+\varepsilon\le f(a,x)+\varepsilon$. Hence, $|f(a,x)-f(b,x_0)|&lt;\varepsilon$. </p> <p>Now suppose WLOG that $b$ is between $g(x)$ and $g(x_0)$ (the case of $a$ is identical). We have that $|f(a,x)-f(a,x_0)|&lt;\varepsilon$ and $|f(g(x),x)-f(b,x_0)|&lt;\varepsilon$. Hence, $f(a,x)&lt;\varepsilon+f(a,x_0)\le \varepsilon+f(b,x_0)$ and $f(b,x_0)&lt;\varepsilon +f(g(x),x)\le \varepsilon +f(a,x)\ldotp$ Hence, $|f(a,x)-f(b,x_0)|&lt;\varepsilon$.</p> <p>Hence, $h$ is continuous.</p>
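The proof is abstract, but the statement is easy to probe numerically. A small Python sketch that approximates $h$ on a grid for one concrete (and entirely made-up) choice of $f$ and $g$:

```python
import numpy as np

# Example continuous f and g (arbitrary choices, just for illustration)
f = lambda t, x: np.sin(3 * t) + t * x
g = lambda x: x ** 2

def h(x, grid_size=2001):
    # sup over t in [g(x), 1], approximated by a max over a fine grid
    t = np.linspace(g(x), 1.0, grid_size)
    return np.max(f(t, x))

xs = np.linspace(0.0, 1.0, 11)
print([round(h(x), 4) for x in xs])  # values change gradually, with no jumps
```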
Integral of $\sec^2{x} \tan^2{x}$
2,085,671
4
https://math.stackexchange.com/questions/2085671/integral-of-sec2x-tan2x
<p>I'm once again stuck; I'm trying to find $$\int{(\sec^2{x} \tan^2{x})dx}$$ but end up with things like: $\tan^3x-2\int{\tan^2x \sec^2x\,dx}$, which doesn't help.</p> <p>Would it be best to approach using integration by parts or substitution?</p>
[ { "answer_id": 2085673, "score": 6, "text": "<p>Let $u=\\tan x$, $du=\\sec^2 x~dx$. The integral becomes:</p>\n\n<p>$$\\int u^2\\ du$$</p>\n\n<p>Can you continue from here?</p>\n" }, { "answer_id": 2085686, "score": 2, "text": "<p>I'm assuming you got your answer using 'by parts'. Below is how you could finish it using 'by parts'. Note that other solutions such as by @ZacharySelk are simpler.</p>\n\n<p>Using your line of working:</p>\n\n<p>$$\\int sec^2x \\tan^2x dx = tan^2x - 2\\int \\sec^2x \\tan^2x dx$$</p>\n\n<p>You can move the $- 2\\int \\sec^2x \\tan^2x dx$ to the left hand side of the equation by addition.</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx+ 2\\int \\sec^2x \\tan^2x dx= tan^2x +c, c\\in\\mathbb{R}$$</p>\n\n<p>Note that once we have a side without an integral on it you need to include a constant of integration. I have used $c$.</p>\n\n<p>The two expressions on the left hand side are the same so you can add them giving:</p>\n\n<p>$$3\\int \\sec^2x \\tan^2x dx= tan^2x +c$$</p>\n\n<p>So simply divide by 3 to get your answer:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +\\frac{c}{3}$$</p>\n\n<p>Note that as $c$ is an real number we could replace $\\frac{c}{3}$ with $c_2$ to write the answer more neatly as:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +c_2$$</p>\n" }, { "answer_id": 2085675, "score": 0, "text": "<p>Putting $u = \\tan x$, $du=\\sec^2x\\ dx$, we get:</p>\n\n<p>$\\ \\ \\ \\ \\int u^2du$</p>\n\n<p>$=\\frac{u^3}{3}+c$</p>\n\n<p>$=\\frac{\\tan^3x}{3}+c$</p>\n" }, { "answer_id": 2085673, "score": 6, "text": "<p>Let $u=\\tan x$, $du=\\sec^2 x~dx$. The integral becomes:</p>\n\n<p>$$\\int u^2\\ du$$</p>\n\n<p>Can you continue from here?</p>\n" }, { "answer_id": 2085686, "score": 2, "text": "<p>I'm assuming you got your answer using 'by parts'. Below is how you could finish it using 'by parts'. Note that other solutions such as by @ZacharySelk are simpler.</p>\n\n<p>Using your line of working:</p>\n\n<p>$$\\int sec^2x \\tan^2x dx = tan^2x - 2\\int \\sec^2x \\tan^2x dx$$</p>\n\n<p>You can move the $- 2\\int \\sec^2x \\tan^2x dx$ to the left hand side of the equation by addition.</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx+ 2\\int \\sec^2x \\tan^2x dx= tan^2x +c, c\\in\\mathbb{R}$$</p>\n\n<p>Note that once we have a side without an integral on it you need to include a constant of integration. I have used $c$.</p>\n\n<p>The two expressions on the left hand side are the same so you can add them giving:</p>\n\n<p>$$3\\int \\sec^2x \\tan^2x dx= tan^2x +c$$</p>\n\n<p>So simply divide by 3 to get your answer:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +\\frac{c}{3}$$</p>\n\n<p>Note that as $c$ is an real number we could replace $\\frac{c}{3}$ with $c_2$ to write the answer more neatly as:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +c_2$$</p>\n" }, { "answer_id": 2085675, "score": 0, "text": "<p>Putting $u = \\tan x$, $du=\\sec^2x\\ dx$, we get:</p>\n\n<p>$\\ \\ \\ \\ \\int u^2du$</p>\n\n<p>$=\\frac{u^3}{3}+c$</p>\n\n<p>$=\\frac{\\tan^3x}{3}+c$</p>\n" }, { "answer_id": 2085673, "score": 6, "text": "<p>Let $u=\\tan x$, $du=\\sec^2 x~dx$. The integral becomes:</p>\n\n<p>$$\\int u^2\\ du$$</p>\n\n<p>Can you continue from here?</p>\n" }, { "answer_id": 2085686, "score": 2, "text": "<p>I'm assuming you got your answer using 'by parts'. Below is how you could finish it using 'by parts'. 
Note that other solutions such as by @ZacharySelk are simpler.</p>\n\n<p>Using your line of working:</p>\n\n<p>$$\\int sec^2x \\tan^2x dx = tan^2x - 2\\int \\sec^2x \\tan^2x dx$$</p>\n\n<p>You can move the $- 2\\int \\sec^2x \\tan^2x dx$ to the left hand side of the equation by addition.</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx+ 2\\int \\sec^2x \\tan^2x dx= tan^2x +c, c\\in\\mathbb{R}$$</p>\n\n<p>Note that once we have a side without an integral on it you need to include a constant of integration. I have used $c$.</p>\n\n<p>The two expressions on the left hand side are the same so you can add them giving:</p>\n\n<p>$$3\\int \\sec^2x \\tan^2x dx= tan^2x +c$$</p>\n\n<p>So simply divide by 3 to get your answer:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +\\frac{c}{3}$$</p>\n\n<p>Note that as $c$ is an real number we could replace $\\frac{c}{3}$ with $c_2$ to write the answer more neatly as:</p>\n\n<p>$$\\int \\sec^2x \\tan^2x dx= \\frac{tan^2x}{3} +c_2$$</p>\n" }, { "answer_id": 2085675, "score": 0, "text": "<p>Putting $u = \\tan x$, $du=\\sec^2x\\ dx$, we get:</p>\n\n<p>$\\ \\ \\ \\ \\int u^2du$</p>\n\n<p>$=\\frac{u^3}{3}+c$</p>\n\n<p>$=\\frac{\\tan^3x}{3}+c$</p>\n" } ]
Question: Integral of $\sec^2{x} \tan^2{x}$ Answer 1: <p>Let $u=\tan x$, $du=\sec^2 x~dx$. The integral becomes:</p> <p>$$\int u^2\ du$$</p> <p>Can you continue from here?</p> Answer 2: <p>I'm assuming you got your answer using 'by parts'. Below is how you could finish it using 'by parts'. Note that other solutions such as by @ZacharySelk are simpler.</p> <p>Using your line of working:</p> <p>$$\int \sec^2x \tan^2x\, dx = \tan^3x - 2\int \sec^2x \tan^2x\, dx$$</p> <p>You can move the $- 2\int \sec^2x \tan^2x\, dx$ to the left hand side of the equation by addition.</p> <p>$$\int \sec^2x \tan^2x\, dx+ 2\int \sec^2x \tan^2x\, dx= \tan^3x +c, c\in\mathbb{R}$$</p> <p>Note that once we have a side without an integral on it you need to include a constant of integration. I have used $c$.</p> <p>The two expressions on the left hand side are the same, so you can add them, giving:</p> <p>$$3\int \sec^2x \tan^2x\, dx= \tan^3x +c$$</p> <p>So simply divide by 3 to get your answer:</p> <p>$$\int \sec^2x \tan^2x\, dx= \frac{\tan^3x}{3} +\frac{c}{3}$$</p> <p>Note that as $c$ is a real number we could replace $\frac{c}{3}$ with $c_2$ to write the answer more neatly as:</p> <p>$$\int \sec^2x \tan^2x\, dx= \frac{\tan^3x}{3} +c_2$$</p> Answer 3: <p>Putting $u = \tan x$, $du=\sec^2x\ dx$, we get:</p> <p>$\ \ \ \ \int u^2du$</p> <p>$=\frac{u^3}{3}+c$</p> <p>$=\frac{\tan^3x}{3}+c$</p>
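A quick numerical check of the antiderivative $\frac{\tan^3x}{3}$ (my own verification sketch; it assumes scipy is available, and the upper limit is an arbitrary choice inside $(-\pi/2, \pi/2)$):

```python
import math
from scipy.integrate import quad

# sec^2(x) * tan^2(x)
integrand = lambda x: (1.0 / math.cos(x)) ** 2 * math.tan(x) ** 2

a = 1.0  # arbitrary upper limit inside (-pi/2, pi/2)
numeric, _ = quad(integrand, 0.0, a)
closed = math.tan(a) ** 3 / 3  # tan^3(x)/3 from the answers

print(numeric, closed)  # both approximately 1.259
```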