question
string
question_id
int64
score
int64
link
string
body
string
answers
list
verbolised
string
Prime number modulo
2,085,685
4
https://math.stackexchange.com/questions/2085685/prime-number-modulo
<p>I would be very thankful if somebody would help me understand and solve the following problem:</p> <p>"Let $p$ be an odd prime number, let $a$ be any integer, let $b = a^{(p-1)/2}$. Show that $b$ mod $p$ is either $0$ or $1$ or $(p-1)$."</p> <p>Thanks</p>
[ { "answer_id": 2085698, "score": 2, "text": "<p>Fermat's little theorem tells us that $a^p\\equiv a\\pmod p$. Since $p-1$ is even, we can write\n$$a(a^{(p-1)/2}+1)(a^{(p-1)/2}-1)\\equiv 0\\pmod p$$\nNow, since $\\Bbb Z_p$ is an integral domain (in fact it is a field), one of the factors is $0$.</p>\n" }, { "answer_id": 2085695, "score": 1, "text": "<p>If $p$ divides $a$, then $a^{(p-1)/2}\\equiv 0 \\mod p$, and then $b\\equiv 0 \\mod p$. If $p$ does not divide $a$, then $a^{p-1} \\equiv 1 \\mod p$ (Fermat's little thm). Therefore, since $b=a^{(p-1)/2}$, we have $b^2\\equiv 1 \\mod p$, and then $b\\equiv \\pm 1 \\mod p$.</p>\n" } ]
Question: Prime number modulo Answer 1: <p>Fermat's little theorem tells us that $a^p\equiv a\pmod p$. Since $p-1$ is even, we can write $$a(a^{(p-1)/2}+1)(a^{(p-1)/2}-1)\equiv 0\pmod p$$ Now, since $\Bbb Z_p$ is an integral domain (in fact it is a field), one of the factors is $0$.</p> Answer 2: <p>If $p$ divides $a$, then $a^{(p-1)/2}\equiv 0 \mod p$, and then $b\equiv 0 \mod p$. If $p$ does not divide $a$, then $a^{p-1} \equiv 1 \mod p$ (Fermat's little thm). Therefore, since $b=a^{(p-1)/2}$, we have $b^2\equiv 1 \mod p$, and then $b\equiv \pm 1 \mod p$.</p>
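The claim in both answers is easy to spot-check numerically. The following sketch (not part of the original thread; the function name is mine) uses Python's built-in three-argument `pow` for modular exponentiation and verifies that $a^{(p-1)/2} \bmod p$ lands in $\{0, 1, p-1\}$ for a few small odd primes:

```python
# Empirical check of the claim (not from the original thread): for an odd
# prime p and any integer a, b = a^((p-1)/2) mod p is 0, 1, or p-1.
# pow(a, e, p) is Python's built-in modular exponentiation.
def euler_criterion_values(p):
    return {pow(a, (p - 1) // 2, p) for a in range(p)}

for p in [3, 5, 7, 11, 13]:
    assert euler_criterion_values(p) <= {0, 1, p - 1}
```

This is Euler's criterion in disguise: the value is $0$ when $p \mid a$, $1$ when $a$ is a nonzero quadratic residue mod $p$, and $p-1$ otherwise.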
Famous irrational constants whose $n$th digit can be computed in constant time?
2,085,736
4
https://math.stackexchange.com/questions/2085736/famous-irrational-constants-whose-nth-digit-can-be-computed-in-constant-time
<p>The BBP formula for $\pi$ runs in $O(n \log n)$. I was wondering if there are irrational constants whose $n$-th digit can be computed in time $O(1)$ or at most $O(\log n)$.</p>
[ { "answer_id": 2085740, "score": 5, "text": "<p>The number $$\\sum_{k=1}^\\infty\\frac1{10^{10^k}}$$\nis irrational. With good marketing it can become as famous as $\\pi$.</p>\n" }, { "answer_id": 2085769, "score": 2, "text": "<p>Let $S$ be the set of all natural numbers starting with $11$ in some fixed base $b$ (if operating on a binary computer, pick $b=2$). Then $S$ does not have a natural density, which precludes it from being eventually periodic. Thus $\\sum_{n\\in S} 10^{-n}$ is irrational, and any digit can be computed in constant time for a reasonable representation of the index.</p>\n" } ]
Question: Famous irrational constants whose $n$th digit can be computed in constant time? Answer 1: <p>The number $$\sum_{k=1}^\infty\frac1{10^{10^k}}$$ is irrational. With good marketing it can become as famous as $\pi$.</p> Answer 2: <p>Let $S$ be the set of all natural numbers starting with $11$ in some fixed base $b$ (if operating on a binary computer, pick $b=2$). Then $S$ does not have a natural density, which precludes it from being eventually periodic. Thus $\sum_{n\in S} 10^{-n}$ is irrational, and any digit can be computed in constant time for a reasonable representation of the index.</p>
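The first answer's constant makes the claim concrete: the $n$-th decimal digit of $\sum_{k\ge1} 10^{-10^k}$ is $1$ exactly when $n$ is a power of $10$ with $n \ge 10$, and $0$ otherwise, so each digit costs at most $O(\log n)$ arithmetic. A minimal sketch (the function name is mine, not from the thread):

```python
# Sketch for the first answer (function name is an assumption, not from the
# thread): the n-th decimal digit of sum_{k>=1} 10^(-10^k) is 1 exactly when
# n is a power of 10 with n >= 10, and 0 otherwise -- O(log n) per digit.
def nth_digit(n):
    if n < 10:
        return 0
    while n % 10 == 0:
        n //= 10  # strip trailing zeros; n is a power of 10 iff 1 remains
    return 1 if n == 1 else 0
```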
show there is no C^1 homeomorphism from R^3 to R^2
2,085,798
4
https://math.stackexchange.com/questions/2085798/show-there-is-no-c1-homeomorphism-from-r3-to-r2
<p>Show that there is no C^1 homeomorphism from $R^3$ to $R^2$.</p> <p>I am fully aware that in general $R^m$ and $R^n$ are not homeomorphic, by homology theory. I wonder whether, if we add the condition $C^1$, we can have a proof using differential calculus.</p> <p>Here is what I've tried, just in case someone asks: </p> <p>I figured that by the rank theorem the differential of such a map can't have rank 2. But I don't see any contradiction if the rank is 1 or 0. I don't think I'm going about it the right way. </p>
[ { "answer_id": 2085861, "score": 2, "text": "<p>a) If the linear map $df_A:\\mathbb R^3\\to \\mathbb R^2$ has rank $2$ at a single point $A\\in \\mathbb R^3$, then locally near $A$ the rank theorem says that up to a change of coordinates $f(x,y,z)=(x,y)$, so that $f$ is not injective and thus even less a homeomorphism. </p>\n\n<p>b) If $f$ is nowhere of rank $2$, then all points of $\\mathbb R^3$ are critical.<br>\nBut then $f(\\mathbb R^3)$ has zero Lebesgue measure by <a href=\"https://en.wikipedia.org/wiki/Sard&#39;s_theorem\" rel=\"nofollow noreferrer\">Sard's theorem</a> and thus $f$ is hilariously far from being surjective and even farther from being a homeomorphism.<br>\n[The use of Sard requires the slightly stronger hypothesis that $f$ be at least $C^2$] </p>\n" }, { "answer_id": 2086193, "score": 2, "text": "<p>Let me expand on my comment: I thought it was clear; Georges's comment clearly shows that my comment was clearly unclear. </p>\n\n<p>Claim. If $f: R^n\\to R^{n-1}$ is $C^1$-smooth, then $f$ cannot be injective.</p>\n\n<p>Proof. Let $k\\le n-1$ denote the maximum of all $i$'s such that there exists a point $p\\in R^n$ such that $rank(df_p)=i$. Then, by continuity of the determinant function (applied to $k\\times k$ submatrices of $df$), there exists an open subset $U\\subset R^n$ such that $df$ has rank $k$ on $U$. Now, apply the constant rank theorem to the restriction $f|U$ and conclude that $f: U\\to R^{n-1}$ cannot be injective. qed </p>\n\n<p><strong>Note.</strong> No need to appeal to Sard's theorem in this argument. </p>\n" } ]
Question: show there is no C^1 homeomorphism from R^3 to R^2 Answer 1: <p>a) If the linear map $df_A:\mathbb R^3\to \mathbb R^2$ has rank $2$ at a single point $A\in \mathbb R^3$, then locally near $A$ the rank theorem says that up to a change of coordinates $f(x,y,z)=(x,y)$, so that $f$ is not injective and thus even less a homeomorphism. </p> <p>b) If $f$ is nowhere of rank $2$, then all points of $\mathbb R^3$ are critical.<br> But then $f(\mathbb R^3)$ has zero Lebesgue measure by <a href="https://en.wikipedia.org/wiki/Sard&#39;s_theorem" rel="nofollow noreferrer">Sard's theorem</a> and thus $f$ is hilariously far from being surjective and even farther from being a homeomorphism.<br> [The use of Sard requires the slightly stronger hypothesis that $f$ be at least $C^2$] </p> Answer 2: <p>Let me expand on my comment: I thought it was clear; Georges's comment clearly shows that my comment was clearly unclear. </p> <p>Claim. If $f: R^n\to R^{n-1}$ is $C^1$-smooth, then $f$ cannot be injective.</p> <p>Proof. Let $k\le n-1$ denote the maximum of all $i$'s such that there exists a point $p\in R^n$ such that $rank(df_p)=i$. Then, by continuity of the determinant function (applied to $k\times k$ submatrices of $df$), there exists an open subset $U\subset R^n$ such that $df$ has rank $k$ on $U$. Now, apply the constant rank theorem to the restriction $f|U$ and conclude that $f: U\to R^{n-1}$ cannot be injective. qed </p> <p><strong>Note.</strong> No need to appeal to Sard's theorem in this argument. </p>
Calculate the limit without using L&#39;Hopital&#39;s Rule: $\lim\limits_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}$
2,085,878
4
https://math.stackexchange.com/questions/2085878/calculate-the-limit-without-using-lhopitals-rule-lim-limits-x-to-0-frac
<p>Need to calculate the following limit without using L'Hopital's Rule:</p> <p>$$ \lim_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} $$</p> <p>The problem I'm facing is that no matter what I do I still get an expression of the form $$ \frac{0}{0} $$ I thought maybe to use $$ t = e^{2x} $$ but I still can't simplify it enough.</p> <p>Thank you</p>
[ { "answer_id": 2085881, "score": 6, "text": "<p>HINT:</p>\n\n<p>As $x\\to0$, $x\\ne0$, so we may safely divide numerator &amp; denominator by $x$</p>\n\n<p>and use </p>\n\n<p>$$\\lim_{h\\to0}\\dfrac{e^h-1}h=1$$</p>\n\n<p>Observe that the exponent of $e,$ the limit variable and the denominator are the <strong>same</strong>.</p>\n" }, { "answer_id": 2085888, "score": 4, "text": "<p>$$ \\lim_{x \\to 0} \\frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\\\\n \\lim_{x \\to 0} \\frac{5x - (1+2x+\\frac{(2x)^2}{2!}+o(x^3))+1}{3x +3(1+4x+\\frac{(4x)^2}{2!}+o(x^3))-3} =\\\\\n \\lim_{x \\to 0} \\frac{3x-\\frac{(2x)^2}{2!}+o(x^3)}{15x +3\\frac{(4x)^2}{2!}+o(x^3)} =\n\\lim_{x \\to 0} \\frac{3x}{15x } =\\frac{3}{15}$$</p>\n" }, { "answer_id": 2085889, "score": 3, "text": "<p>Using Taylor's series:</p>\n\n<p>$$\\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \\frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \\frac{3x - O(x^2)}{15x+ O(x^2)} .$$</p>\n\n<p>Thus, $$\\lim_{x \\rightarrow 0} \\frac{3x - O(x^2)}{15x+ O(x^2)} = \\frac{1}{5}.$$</p>\n" }, { "answer_id": 2085885, "score": 1, "text": "<p><strong>Hint</strong>: We can divide by $x $ to get $$\\lim_{x \\to 0} \\frac {5-(\\frac {e^{2x}-1}{2x}(2))}{3+3 (\\frac {e^{4x}-1}{4x}(4))} $$ Can you take it from here?</p>\n" }, { "answer_id": 2085890, "score": 1, "text": "<p>Do you know the Taylor formula? If so, then\n$$\ne^x = 1 + x + o(x)\n$$\nas $x \\to 0$. Here, $o(x)$ is a function such that $o(x)/x \\to 0$ as $x \\to 0$.\nHence\n$$\n\\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \\frac{5x - (1 + 2x + o(x)) + 1}{3x + 3(1 + 4x + o(x))-3} = \\frac{3x+ o(x)}{15x+o(x)} = \\frac{3+o(x)/x}{15+o(x)/x}\n$$\nwhich shows that your limit is $1/5$.</p>\n" }, { "answer_id": 2085923, "score": 1, "text": "<p>$$ \\frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\frac12\\frac{\\dfrac52-\\dfrac{e^{2x}-1}{2x}}{\\dfrac34+3\\dfrac{e^{4x}-1}{4x}}\\to\\frac12\\frac{\\dfrac52-L}{\\dfrac34+3L}.$$ (by a scaling of $x$, the two ratios tend to $L$).</p>\n\n<p>Then you can take for granted that $L=1$, giving the answer $\\dfrac15$.</p>\n" } ]
Question: Calculate the limit without using L&#39;Hopital&#39;s Rule: $\lim\limits_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}$ Answer 1: <p>HINT:</p> <p>As $x\to0$, $x\ne0$, so we may safely divide numerator &amp; denominator by $x$</p> <p>and use </p> <p>$$\lim_{h\to0}\dfrac{e^h-1}h=1$$</p> <p>Observe that the exponent of $e,$ the limit variable and the denominator are the <strong>same</strong>.</p> Answer 2: <p>$$ \lim_{x \to 0} \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\\ \lim_{x \to 0} \frac{5x - (1+2x+\frac{(2x)^2}{2!}+o(x^3))+1}{3x +3(1+4x+\frac{(4x)^2}{2!}+o(x^3))-3} =\\ \lim_{x \to 0} \frac{3x-\frac{(2x)^2}{2!}+o(x^3)}{15x +3\frac{(4x)^2}{2!}+o(x^3)} = \lim_{x \to 0} \frac{3x}{15x } =\frac{3}{15}$$</p> Answer 3: <p>Using Taylor's series:</p> <p>$$\frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1+2x + O(x^2))+1}{3x +3(1+4x+O(x^2))-3} = \frac{3x - O(x^2)}{15x+ O(x^2)} .$$</p> <p>Thus, $$\lim_{x \rightarrow 0} \frac{3x - O(x^2)}{15x+ O(x^2)} = \frac{1}{5}.$$</p> Answer 4: <p><strong>Hint</strong>: We can divide by $x $ to get $$\lim_{x \to 0} \frac {5-(\frac {e^{2x}-1}{2x}(2))}{3+3 (\frac {e^{4x}-1}{4x}(4))} $$ Can you take it from here?</p> Answer 5: <p>Do you know the Taylor formula? If so, then $$ e^x = 1 + x + o(x) $$ as $x \to 0$. Here, $o(x)$ is a function such that $o(x)/x \to 0$ as $x \to 0$. Hence $$ \frac{5x - e^{2x}+1}{3x +3e^{4x}-3} = \frac{5x - (1 + 2x + o(x)) + 1}{3x + 3(1 + 4x + o(x))-3} = \frac{3x+ o(x)}{15x+o(x)} = \frac{3+o(x)/x}{15+o(x)/x} $$ which shows that your limit is $1/5$.</p> Answer 6: <p>$$ \frac{5x - e^{2x}+1}{3x +3e^{4x}-3}=\frac12\frac{\dfrac52-\dfrac{e^{2x}-1}{2x}}{\dfrac34+3\dfrac{e^{4x}-1}{4x}}\to\frac12\frac{\dfrac52-L}{\dfrac34+3L}.$$ (by a scaling of $x$, the two ratios tend to $L$).</p> <p>Then you can take for granted that $L=1$, giving the answer $\dfrac15$.</p>
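All of the answers above agree that the limit is $1/5$; a quick floating-point evaluation near $0$ (a sanity check, not a proof) is consistent with that:

```python
import math

# Numerical sanity check (not a proof): the quotient should approach
# 1/5 = 0.2 as x -> 0, matching the answers above.
def f(x):
    return (5 * x - math.exp(2 * x) + 1) / (3 * x + 3 * math.exp(4 * x) - 3)

for x in [1e-3, 1e-5, 1e-7]:
    print(x, f(x))
```

Note that for very small `x` the subtractions `exp(2*x) - 1` lose precision in floating point, so the printed values plateau in accuracy; `math.expm1` would be the more careful choice.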
Given a non-discrete topology on a finite set $X$, can you define a metric?
2,086,000
4
https://math.stackexchange.com/questions/2086000/given-a-non-discrete-topology-on-a-finite-set-x-can-you-define-a-metric
<p>Let $ X $ be a finite set and $ \tau $ a topology on $ X $ different from the discrete topology. Is it possible to define a metric on the set $ X $ such that $ \tau $ is the topology associated with the metric?</p>
[ { "answer_id": 2086015, "score": 5, "text": "<p>In general the answer is <em>no</em>. A topology for which such a metric <em>does</em> exist is said to be <em>metrizable</em>, and many topologies are not metrizable, even on finite sets. In fact, the only $T_1$ topology on a finite set is the discrete topology, which of course is metrizable; since a metric space is automatically $T_1$, this means that there is only one metrizable topology on a finite set, the discrete topology. And if the set has more than one element, there are other topologies, starting with the indiscrete topology.</p>\n\n<p>One of the nicer non-metrizable topologies on the set $[n]=\\{1,\\ldots,n\\}$ is the one whose open sets are initial segments: </p>\n\n<p>$$\\tau=\\{[k]:k\\in[n]\\}\\cup\\{\\varnothing\\}\\;.$$</p>\n\n<p>Since this isn’t $T_1$, it clearly cannot be metrizable. I’ll call it a <em>nest topology</em> on $[n]$.</p>\n\n<p>More generally, if $X$ is any finite set, let $\\mathscr{P}$ be a partition of $X$ having at least one part with more than one element. Give each $P\\in\\mathscr{P}$ a nest topology, and make each $P\\in\\mathscr{P}$ an open set in $X$. The resulting topology is $T_0$ but not $T_1$. The closest you can get to a metrizable finite space without actually making the topology discrete is a special case of this, in which one part has two points, and the rest have one each. This is what you get if you start with a discrete space and replace one of the points with an open copy of the <a href=\"https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space\" rel=\"nofollow noreferrer\">Sierpiński space</a>. You now have a space with (say) $n+1$ points, $n$ of which are isolated; the remaining point has a smallest nbhd consisting of itself and one of the other points. </p>\n\n<p>The situation with infinite spaces is much more interesting. You may know some of this already, but this seems a nice opportunity to list some common examples of (mostly quite nice) non-metrizable spaces. 
All of them are at least $T_1$. The spaces in the second, third, and fifth bullet points are $T_5$ spaces, i.e., $T_1$ and hereditarily normal. And the spaces in the fourth bullet point are Tikhonov provided that the factor spaces are Tikhonov.</p>\n\n<ul>\n<li><p>The cofinite topology on an infinite set is not metrizable. One way to see this is to note that every metric space is Hausdorff, but the cofinite topology on an infinite set is not Hausdorff. For similar reasons the co-countable topology on an uncountable set is not metrizable.</p></li>\n<li><p>The <a href=\"https://en.wikipedia.org/wiki/Lower_limit_topology\" rel=\"nofollow noreferrer\">Sorgenfrey line</a> $S$ is not metrizable. One way to see this is to show that $S\\times S$, the <a href=\"https://en.wikipedia.org/wiki/Sorgenfrey_plane\" rel=\"nofollow noreferrer\">Sorgenfrey plane</a>, is not normal; since every metric space is normal, and the product of two metric spaces is metrizable, it follows that $S$ cannot be metrizable.</p></li>\n<li><p>The <a href=\"https://en.wikipedia.org/wiki/Alexandroff_extension\" rel=\"nofollow noreferrer\">one-point compactification</a> $X$ of an uncountable discrete space is a very nice space in many ways, since it’s compact and Hausdorff, but it’s not metrizable: every metric space is first countable, but the point at infinity in $X$ does not have a countable local base.</p></li>\n<li><p>The <a href=\"https://en.wikipedia.org/wiki/Box_topology\" rel=\"nofollow noreferrer\">box topology</a> on the product of infinitely many non-trivial spaces is never metrizable, because it’s never first countable.</p></li>\n<li><p>An uncountable ordinal $\\alpha$ with the linear order topology is never metrizable. If $\\alpha&gt;\\omega_1$, it isn’t even first countable, and $\\omega_1$, the smallest uncountable ordinal, is countably compact but not compact. 
Since a metric space is compact if and only if it is countably compact, $\\omega_1$ cannot be metrizable.</p></li>\n</ul>\n" }, { "answer_id": 2086016, "score": 5, "text": "<p>In a finite metric space the natural associated topology is always discrete: define $$r:=\\min\\{d(x,y):x,y\\in X, x \\neq y\\}&gt;0.$$\nThen $B(x,r)=\\{x\\}$ for every $x\\in X$, therefore singletons are open.</p>\n" } ]
Question: Given a non-discrete topology on a finite set $X$, can you define a metric? Answer 1: <p>In general the answer is <em>no</em>. A topology for which such a metric <em>does</em> exist is said to be <em>metrizable</em>, and many topologies are not metrizable, even on finite sets. In fact, the only $T_1$ topology on a finite set is the discrete topology, which of course is metrizable; since a metric space is automatically $T_1$, this means that there is only one metrizable topology on a finite set, the discrete topology. And if the set has more than one element, there are other topologies, starting with the indiscrete topology.</p> <p>One of the nicer non-metrizable topologies on the set $[n]=\{1,\ldots,n\}$ is the one whose open sets are initial segments: </p> <p>$$\tau=\{[k]:k\in[n]\}\cup\{\varnothing\}\;.$$</p> <p>Since this isn’t $T_1$, it clearly cannot be metrizable. I’ll call it a <em>nest topology</em> on $[n]$.</p> <p>More generally, if $X$ is any finite set, let $\mathscr{P}$ be a partition of $X$ having at least one part with more than one element. Give each $P\in\mathscr{P}$ a nest topology, and make each $P\in\mathscr{P}$ an open set in $X$. The resulting topology is $T_0$ but not $T_1$. The closest you can get to a metrizable finite space without actually making the topology discrete is a special case of this, in which one part has two points, and the rest have one each. This is what you get if you start with a discrete space and replace one of the points with an open copy of the <a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space" rel="nofollow noreferrer">Sierpiński space</a>. You now have a space with (say) $n+1$ points, $n$ of which are isolated; the remaining point has a smallest nbhd consisting of itself and one of the other points. </p> <p>The situation with infinite spaces is much more interesting. 
You may know some of this already, but this seems a nice opportunity to list some common examples of (mostly quite nice) non-metrizable spaces. All of them are at least $T_1$. The spaces in the second, third, and fifth bullet points are $T_5$ spaces, i.e., $T_1$ and hereditarily normal. And the spaces in the fourth bullet point are Tikhonov provided that the factor spaces are Tikhonov.</p> <ul> <li><p>The cofinite topology on an infinite set is not metrizable. One way to see this is to note that every metric space is Hausdorff, but the cofinite topology on an infinite set is not Hausdorff. For similar reasons the co-countable topology on an uncountable set is not metrizable.</p></li> <li><p>The <a href="https://en.wikipedia.org/wiki/Lower_limit_topology" rel="nofollow noreferrer">Sorgenfrey line</a> $S$ is not metrizable. One way to see this is to show that $S\times S$, the <a href="https://en.wikipedia.org/wiki/Sorgenfrey_plane" rel="nofollow noreferrer">Sorgenfrey plane</a>, is not normal; since every metric space is normal, and the product of two metric spaces is metrizable, it follows that $S$ cannot be metrizable.</p></li> <li><p>The <a href="https://en.wikipedia.org/wiki/Alexandroff_extension" rel="nofollow noreferrer">one-point compactification</a> $X$ of an uncountable discrete space is a very nice space in many ways, since it’s compact and Hausdorff, but it’s not metrizable: every metric space is first countable, but the point at infinity in $X$ does not have a countable local base.</p></li> <li><p>The <a href="https://en.wikipedia.org/wiki/Box_topology" rel="nofollow noreferrer">box topology</a> on the product of infinitely many non-trivial spaces is never metrizable, because it’s never first countable.</p></li> <li><p>An uncountable ordinal $\alpha$ with the linear order topology is never metrizable. If $\alpha&gt;\omega_1$, it isn’t even first countable, and $\omega_1$, the smallest uncountable ordinal, is countably compact but not compact. 
Since a metric space is compact if and only if it is countably compact, $\omega_1$ cannot be metrizable.</p></li> </ul> Answer 2: <p>In a finite metric space the natural associated topology is always discrete: define $$r:=\min\{d(x,y):x,y\in X, x \neq y\}&gt;0.$$ Then $B(x,r)=\{x\}$ for every $x\in X$, therefore singletons are open.</p>
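The finite-metric-space argument in Answer 2 can be checked computationally: the minimum pairwise distance $r$ is positive, and each open ball $B(x,r)$ contains only its center, so every singleton is open. A minimal sketch in Python (the four sample points and the Euclidean metric are illustrative assumptions, not part of the answers):

```python
from itertools import combinations

# A small finite metric space: points in the plane with the Euclidean metric.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]

def d(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# r is the minimum distance between distinct points; it is positive
# precisely because the space is finite.
r = min(d(p, q) for p, q in combinations(points, 2))

def ball(x, radius):
    """Open ball of the given radius around x."""
    return [p for p in points if d(x, p) < radius]

# B(x, r) = {x} for every point, so the induced topology is discrete.
assert r > 0
assert all(ball(x, r) == [x] for x in points)
```

On an infinite space this argument breaks down because the infimum of the pairwise distances can be $0$, which is consistent with the non-metrizable infinite examples in Answer 1.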
Help in understanding a simple proof.
2,086,428
4
https://math.stackexchange.com/questions/2086428/help-in-understanding-a-simple-proof
<p>Let $Ax + By + C = 0$ be a general equation of a line and $x\cos \alpha + y\sin \alpha - p = 0$ be the normal form of the equation. </p> <p>Then,</p> <p>$${-p\over C } = { \cos \alpha\over A} = { \sin\alpha\over B}\tag{1}$$ $${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} \tag{2}$$ $${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$$</p> <p>$$\therefore \bbox[ #FFFDD0, 10px, Border:2px solid #DC143C]{p = {C\over \sqrt{A^2 + B^2}}, \cos \alpha = {-A\over \sqrt{A^2 + B^2}},\sin\alpha = {-B\over \sqrt{A^2 + B^2}}} $$</p> <p>I did not get the $(3)$ part. Where does $\displaystyle{\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}}$ come from ?</p>
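The boxed result can be sanity-checked numerically: dividing $Ax+By+C=0$ by $-\sqrt{A^2+B^2}$ produces the normal form with exactly the stated $p$, $\cos\alpha$, $\sin\alpha$. A quick Python check (the coefficients $A=3$, $B=4$, $C=10$ and the test point are arbitrary illustrative choices):

```python
import math

# Illustrative coefficients for Ax + By + C = 0 (arbitrary choice).
A, B, C = 3.0, 4.0, 10.0

norm = math.hypot(A, B)   # sqrt(A^2 + B^2) = 5
p = C / norm              # p = C / sqrt(A^2 + B^2)
cos_a = -A / norm         # cos(alpha) = -A / sqrt(A^2 + B^2)
sin_a = -B / norm         # sin(alpha) = -B / sqrt(A^2 + B^2)

# cos^2(alpha) + sin^2(alpha) = 1, the identity exploited in step (3).
assert abs(cos_a**2 + sin_a**2 - 1.0) < 1e-12

# Any point on the original line also satisfies the normal form
# x cos(alpha) + y sin(alpha) - p = 0.
x = 2.0
y = -(A * x + C) / B      # solve Ax + By + C = 0 for y
assert abs(x * cos_a + y * sin_a - p) < 1e-12
```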
[ { "answer_id": 2086438, "score": 3, "text": "<p>$$\n{p\\over C } = { \\cos \\alpha\\over -A} = { \\sin\\alpha\\over -B} \n\\\\\n\\implies \\frac{p^2}{C^2}=\\frac{\\cos^2(\\alpha)}{A^2}=\\frac{\\sin^2(\\alpha)}{B^2}\n$$</p>\n\n<p>If \n$$\n\\frac{a}{b}=\\frac{c}{d}=k\n\\\\\n\\implies a=bk, c=dk\n$$\nSo the value of :\n$$\n\\frac{a+c}{b+d}=\\frac{bk+dk}{b+d}=\\frac{k(b+d)}{b+d}=k=\\frac{a}{b}=\\frac{c}{d}\n$$\nSimply put, in a proportion of two or more equal ratios, the sum of the antecedents divided by the sum of the consequents is equal to any one of the ratios.</p>\n\n<p>In the same way :\n$$\n\\frac{p^2}{C^2}=\\frac{\\cos^2(\\alpha)}{A^2}=\\frac{\\sin^2(\\alpha)}{B^2}={{\\sin^2 \\alpha + \\cos^2 \\alpha}\\over {A^2 + B^2}}\n$$\nHence :\n$$\n\\frac{p}{C}={\\sqrt{\\sin^2 \\alpha + \\cos^2 \\alpha}\\over \\sqrt{A^2 + B^2}}\n$$</p>\n" }, { "answer_id": 2086446, "score": 1, "text": "<p>I'd do it differently: since $A$ and $B$ are not both zero, we can divide by $-\\sqrt{A^2+B^2}$, getting\n$$\n\\frac{-A}{\\sqrt{A^2+B^2}}x+\\frac{-B}{\\sqrt{A^2+B^2}}y-\\frac{C}{\\sqrt{A^2+B^2}}=0\n$$\nand we can choose $\\alpha$ so that\n$$\n\\begin{cases}\n\\cos\\alpha=-\\dfrac{A}{\\sqrt{A^2+B^2}}\\\\[6px]\n\\sin\\alpha=-\\dfrac{B}{\\sqrt{A^2+B^2}}\n\\end{cases}\n$$\nand set\n$$\np=\\frac{C}{\\sqrt{A^2+B^2}}\n$$</p>\n\n<hr>\n\n<p>For the mysterious formula, observe that, if\n$$\nk=\\frac{\\cos\\alpha}{-A}=\\frac{\\sin\\alpha}{-B}\n$$\nthen\n$$\nA^2k^2=\\cos^2\\alpha,\\quad B^2k^2=\\sin^2\\alpha\n$$\nso also\n$$\n(A^2+B^2)k^2=1\n$$\nand\n$$\n|k|=\\frac{1}{\\sqrt{A^2+B^2}}\n$$\nOf course one needs to be <em>very</em> precise about the choice of $p$, using this method. 
It is implied that $p&lt;0$ if $C&lt;0$ and $p&gt;0$ if $C&gt;0$; a special case should be made when $C=0$.</p>\n\n<p>The method above is independent of these considerations.</p>\n" }, { "answer_id": 2086486, "score": 1, "text": "<p>Let's modify ${p\\over C } = { \\cos \\alpha\\over -A} = { \\sin\\alpha\\over -B} $</p>\n\n<p>to ${p\\over C } = { \\cos \\alpha\\over -A} = { \\sin\\alpha\\over -B} =K$</p>\n\n<p>And ${p\\over C } = { \\cos \\alpha\\over -A} = { \\sin\\alpha\\over -B} = {\\sqrt{\\sin^2 \\alpha + \\cos^2 \\alpha}\\over \\sqrt{A^2 + B^2}} = {1\\over \\sqrt{A^2 + B^2}} \\tag{3}$</p>\n\n<p>to $K = {p\\over C } = { \\cos \\alpha\\over -A} = { \\sin\\alpha\\over -B} = {K\\sqrt{A^2 +B^2}\\over \\sqrt{A^2 + B^2}}={\\sqrt{(-AK)^2 +(-BK)^2}\\over \\sqrt{A^2 + B^2}}={\\sqrt{\\sin^2 \\alpha + \\cos^2 \\alpha}\\over \\sqrt{A^2 + B^2}} = {1\\over \\sqrt{A^2 + B^2}} \\tag{3}$</p>\n" }, { "answer_id": 2086487, "score": 1, "text": "<p>Let $\\frac{p}{C}=\\frac{\\cos\\alpha}{A}=\\frac{\\sin\\alpha}{B} = k$</p>\n\n<p>Then we have,</p>\n\n<p>$\\cos\\alpha = kA$</p>\n\n<p>On squaring we have,</p>\n\n<p>$\\cos^2\\alpha = k^2A^2$</p>\n\n<p>Also $\\sin\\alpha = kB$</p>\n\n<p>On squaring $\\sin^2\\alpha = k^2B^2$</p>\n\n<p>Adding these two squared equations,</p>\n\n<p>$\\sin^2\\alpha + \\cos^2\\alpha = k^2A^2 + k^2B^2$</p>\n\n<p>$\\sin^2\\alpha + \\cos^2\\alpha= k^2(A^2 + B^2)$</p>\n\n<p>$\\frac{\\sin^2\\alpha + \\cos^2\\alpha}{(A^2 + B^2)} = k^2$</p>\n\n<p>$\\frac{\\sqrt{\\sin^2\\alpha + \\cos^2\\alpha}}{\\sqrt{(A^2 + B^2)}} = k$</p>\n" } ]
Question: Help in understanding a simple proof. Answer 1: <p>$$ {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} \\ \implies \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2} $$</p> <p>If $$ \frac{a}{b}=\frac{c}{d}=k \\ \implies a=bk, c=dk $$ So the value of : $$ \frac{a+c}{b+d}=\frac{bk+dk}{b+d}=\frac{k(b+d)}{b+d}=k=\frac{a}{b}=\frac{c}{d} $$ Simply put, in a proportion of two or more equal ratios, the sum of the antecedents divided by the sum of the consequents is equal to any one of the ratios.</p> <p>In the same way : $$ \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2}={{\sin^2 \alpha + \cos^2 \alpha}\over {A^2 + B^2}} $$ Hence : $$ \frac{p}{C}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} $$</p> Answer 2: <p>I'd do it differently: since $A$ and $B$ are not both zero, we can divide by $-\sqrt{A^2+B^2}$, getting $$ \frac{-A}{\sqrt{A^2+B^2}}x+\frac{-B}{\sqrt{A^2+B^2}}y-\frac{C}{\sqrt{A^2+B^2}}=0 $$ and we can choose $\alpha$ so that $$ \begin{cases} \cos\alpha=-\dfrac{A}{\sqrt{A^2+B^2}}\\[6px] \sin\alpha=-\dfrac{B}{\sqrt{A^2+B^2}} \end{cases} $$ and set $$ p=\frac{C}{\sqrt{A^2+B^2}} $$</p> <hr> <p>For the mysterious formula, observe that, if $$ k=\frac{\cos\alpha}{-A}=\frac{\sin\alpha}{-B} $$ then $$ A^2k^2=\cos^2\alpha,\quad B^2k^2=\sin^2\alpha $$ so also $$ (A^2+B^2)k^2=1 $$ and $$ |k|=\frac{1}{\sqrt{A^2+B^2}} $$ Of course one needs to be <em>very</em> precise about the choice of $p$, using this method. 
It is implied that $p&lt;0$ if $C&lt;0$ and $p&gt;0$ if $C&gt;0$; a special case should be made when $C=0$.</p> <p>The method above is independent of these considerations.</p> Answer 3: <p>Let's modify ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} $</p> <p>to ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} =K$</p> <p>And ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> <p>to $K = {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {K\sqrt{A^2 +B^2}\over \sqrt{A^2 + B^2}}={\sqrt{(-AK)^2 +(-BK)^2}\over \sqrt{A^2 + B^2}}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> Answer 4: <p>Let $\frac{p}{C}=\frac{\cos\alpha}{A}=\frac{\sin\alpha}{B} = k$</p> <p>Then we have,</p> <p>$\cos\alpha = kA$</p> <p>On squaring we have,</p> <p>$\cos^2\alpha = k^2A^2$</p> <p>Also $\sin\alpha = kB$</p> <p>On squaring $\sin^2\alpha = k^2B^2$</p> <p>Adding these two squared equations,</p> <p>$\sin^2\alpha + \cos^2\alpha = k^2A^2 + k^2B^2$</p> <p>$\sin^2\alpha + \cos^2\alpha= k^2(A^2 + B^2)$</p> <p>$\frac{\sin^2\alpha + \cos^2\alpha}{(A^2 + B^2)} = k^2$</p> <p>$\frac{\sqrt{\sin^2\alpha + \cos^2\alpha}}{\sqrt{(A^2 + B^2)}} = k$</p> Answer 5: <p>$$ {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} \\ \implies \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2} $$</p> <p>If $$ \frac{a}{b}=\frac{c}{d}=k \\ \implies a=bk, c=dk $$ So the value of : $$ \frac{a+c}{b+d}=\frac{bk+dk}{b+d}=\frac{k(b+d)}{b+d}=k=\frac{a}{b}=\frac{c}{d} $$ Simply put, in a proportion of two or more equal ratios, the sum of the antecedents divided by the sum of the consequents is equal to any one of the ratios.</p> <p>In the same way : $$ \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2}={{\sin^2 \alpha + \cos^2 \alpha}\over {A^2 + B^2}} $$ Hence : $$ 
\frac{p}{C}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} $$</p> Answer 6: <p>I'd do it differently: since $A$ and $B$ are not both zero, we can divide by $-\sqrt{A^2+B^2}$, getting $$ \frac{-A}{\sqrt{A^2+B^2}}x+\frac{-B}{\sqrt{A^2+B^2}}-\frac{C}{\sqrt{A^2+B^2}}=0 $$ and we can choose $\alpha$ so that $$ \begin{cases} \cos\alpha=-\dfrac{A}{\sqrt{A^2+B^2}}\\[6px] \sin\alpha=-\dfrac{B}{\sqrt{A^2+B^2}} \end{cases} $$ and set $$ p=\frac{C}{\sqrt{A^2+B^2}} $$</p> <hr> <p>For the mysterious formula, observe that, if $$ k=\frac{\cos\alpha}{-A}=\frac{\sin\alpha}{-B} $$ then $$ A^2k^2=\cos^2\alpha,\quad B^2k^2=\sin^2\alpha $$ so also $$ (A^2+B^2)k^2=1 $$ and $$ |k|=\frac{1}{\sqrt{A^2+B^2}} $$ Of course one needs to be <em>very</em> precise about the choice of $p$, using this method. It is implied that $p&lt;0$ if $C&lt;0$ and $p&gt;0$ if $C&gt;0$; a special case should be made when $C=0$.</p> <p>The method above is independent of these considerations.</p> Answer 7: <p>Let's modify ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} $</p> <p>to ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} =K$</p> <p>And ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> <p>to $K = {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {K\sqrt{A^2 +B^2}\over \sqrt{A^2 + B^2}}={\sqrt{(-AK)^2 +(-BK)^2}\over \sqrt{A^2 + B^2}}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> Answer 8: <p>Let $\frac{p}{C}=\frac{\cos\alpha}{A}=\frac{\sin\alpha}{B} = k$</p> <p>Then we have,</p> <p>$\cos\alpha = kA$</p> <p>On squaring we have,</p> <p>$\cos^2\alpha = k^2A^2$</p> <p>Also $\sin\alpha = kB$</p> <p>On squaring $\sin^2\alpha = k^2B^2$</p> <p>Adding these two squared equations,</p> <p>$\sin^2\alpha + \cos^2\alpha = k^2A^2 + k^2B^2$</p> <p>$\sin^2\alpha + \cos^2\alpha= k^2(A^2 + B^2)$</p> 
<p>$\frac{\sin^2\alpha + \cos^2\alpha}{(A^2 + B^2)} = k^2$</p> <p>$\frac{\sqrt{\sin^2\alpha + \cos^2\alpha}}{\sqrt{(A^2 + B^2)}} = k$</p> Answer 9: <p>$$ {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} \\ \implies \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2} $$</p> <p>If $$ \frac{a}{b}=\frac{c}{d}=k \\ \implies a=bk, c=dk $$ So the value of : $$ \frac{a+c}{b+d}=\frac{bk+dk}{b+d}=\frac{k(b+d)}{b+d}=k=\frac{a}{b}=\frac{c}{d} $$ Simply put, in a proportion of two or more equal ratios, the sum of the antecedents divided by the sum of the consequents is equal to any one of the ratios.</p> <p>In the same way : $$ \frac{p^2}{C^2}=\frac{\cos^2(\alpha)}{A^2}=\frac{\sin^2(\alpha)}{B^2}={{\sin^2 \alpha + \cos^2 \alpha}\over {A^2 + B^2}} $$ Hence : $$ \frac{p}{C}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} $$</p> Answer 10: <p>I'd do it differently: since $A$ and $B$ are not both zero, we can divide by $-\sqrt{A^2+B^2}$, getting $$ \frac{-A}{\sqrt{A^2+B^2}}x+\frac{-B}{\sqrt{A^2+B^2}}-\frac{C}{\sqrt{A^2+B^2}}=0 $$ and we can choose $\alpha$ so that $$ \begin{cases} \cos\alpha=-\dfrac{A}{\sqrt{A^2+B^2}}\\[6px] \sin\alpha=-\dfrac{B}{\sqrt{A^2+B^2}} \end{cases} $$ and set $$ p=\frac{C}{\sqrt{A^2+B^2}} $$</p> <hr> <p>For the mysterious formula, observe that, if $$ k=\frac{\cos\alpha}{-A}=\frac{\sin\alpha}{-B} $$ then $$ A^2k^2=\cos^2\alpha,\quad B^2k^2=\sin^2\alpha $$ so also $$ (A^2+B^2)k^2=1 $$ and $$ |k|=\frac{1}{\sqrt{A^2+B^2}} $$ Of course one needs to be <em>very</em> precise about the choice of $p$, using this method. 
It is implied that $p&lt;0$ if $C&lt;0$ and $p&gt;0$ if $C&gt;0$; a special case should be made when $C=0$.</p> <p>The method above is independent of these considerations.</p> Answer 11: <p>Let's modify ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} $</p> <p>to ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} =K$</p> <p>And ${p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> <p>to $K = {p\over C } = { \cos \alpha\over -A} = { \sin\alpha\over -B} = {K\sqrt{A^2 +B^2}\over \sqrt{A^2 + B^2}}={\sqrt{(-AK)^2 +(-BK)^2}\over \sqrt{A^2 + B^2}}={\sqrt{\sin^2 \alpha + \cos^2 \alpha}\over \sqrt{A^2 + B^2}} = {1\over \sqrt{A^2 + B^2}} \tag{3}$</p> Answer 12: <p>Let $\frac{p}{C}=\frac{\cos\alpha}{A}=\frac{\sin\alpha}{B} = k$</p> <p>Then we have,</p> <p>$\cos\alpha = kA$</p> <p>On squaring we have,</p> <p>$\cos^2\alpha = k^2A^2$</p> <p>Also $\sin\alpha = kB$</p> <p>On squaring $\sin^2\alpha = k^2B^2$</p> <p>Adding these two squared equations,</p> <p>$\sin^2\alpha + \cos^2\alpha = k^2A^2 + k^2B^2$</p> <p>$\sin^2\alpha + \cos^2\alpha= k^2(A^2 + B^2)$</p> <p>$\frac{\sin^2\alpha + \cos^2\alpha}{(A^2 + B^2)} = k^2$</p> <p>$\frac{\sqrt{\sin^2\alpha + \cos^2\alpha}}{\sqrt{(A^2 + B^2)}} = k$</p>
Polygon can be covered by circle.
2,086,483
4
https://math.stackexchange.com/questions/2086483/polygon-can-be-covered-by-circle
<p>I saw this problem on a math forum, but it has been left unanswered for months.</p> <p>The problem states:</p> <p>"Prove that every polygon with perimeter $2004$ can be covered by a circle with diameter $1002$."</p> <p>I have tried the following methods but I keep failing. Any hints toward a possible method are appreciated:</p> <p>$1)$ I tried proving that all triangles with perimeter $2004$ can be covered by a circle with diameter $1002$, and then using strong induction: assuming all such $n$-gons can be covered, try to prove it for all $(n+1)$-gons.</p> <p>$2)$ I tried to use contradiction, but also failed.</p>
[ { "answer_id": 2086512, "score": 0, "text": "<p>Hints: Say we have two points on the perimeter of the polygon that are separated by $1002+\\epsilon$ units ($\\epsilon&gt;0$). Say these points are $\\mathbf{x}_1$ and $\\mathbf{x}_2$. Draw a line $L$ through these two points. Assuming the segment between these two points isn't a face of the polygon (the other case follows similarly), there exist line segments, not parallel to $L$, that cross $L$ through the points $\\mathbf{x}_1$ and $\\mathbf{x}_2$ and define distinct faces of the polygon. What is the minimum perimeter of a polygon that encloses the four endpoints of these segments? Can it possibly have a perimeter of $2004$?</p>\n" }, { "answer_id": 2086532, "score": 0, "text": "<p><a href=\"https://i.stack.imgur.com/8yPIP.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8yPIP.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let's say that $D$ is the largest distance between two vertices.</p>\n\n<p>For easier understanding, let's assume the part above $AE$ has only four sides (this doesn't change anything). By the triangle inequality we have:</p>\n\n<p>$$d_1&lt;a_1+a_2$$\n$$d_2&lt;d_1+a_3 \\rightarrow d_2&lt;a_1+a_2+a_3$$\n$$D&lt;d_2+a_4 \\rightarrow D&lt;a_1+a_2+a_3+a_4$$</p>\n\n<p>Applying the same idea to the part below $AE$, we conclude that:</p>\n\n<p>$$2D&lt;a_1+a_2+...+a_n \\rightarrow D&lt;\\frac{a_1+a_2+...+a_n}{2} \\rightarrow D&lt; 1002$$</p>\n\n<p>Then take the circle of diameter $1002$ whose diameter contains $AE$; since $D&lt;1002$, no vertex can lie outside that circle.</p>\n" } ]
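The key inequality in the second answer, $D < \tfrac{1}{2}(a_1 + \dots + a_n)$, i.e. the polygon's diameter is less than half its perimeter, can be checked numerically. A minimal Python sketch with randomly generated example polygons (the sample size and coordinate ranges are arbitrary choices):

```python
import itertools
import math
import random

def diameter_and_perimeter(pts):
    """Max pairwise vertex distance D and perimeter P of the closed polygon
    whose vertices are visited in the order given by pts."""
    D = max(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
    P = sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))
    return D, P

random.seed(0)
for _ in range(100):
    # Random (possibly self-intersecting) hexagons; the bound holds regardless
    # of vertex order, since both boundary arcs between the two farthest
    # vertices have length at least D.
    pts = [(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)) for _ in range(6)]
    D, P = diameter_and_perimeter(pts)
    assert D < P / 2
print("D < P/2 held for every sample")
```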
Optimizing over matrices in the Loewner ordering
2,086,871
4
https://math.stackexchange.com/questions/2086871/optimizing-over-matrices-in-the-loewner-ordering
<p>The so-called <a href="https://en.wikipedia.org/wiki/Loewner_order" rel="nofollow noreferrer">Loewner ordering</a> introduces a partial ordering to the set of Hermitian matrices:</p> <ul> <li><p><span class="math-container">$X \geq Y$</span> if <span class="math-container">$X - Y$</span> is positive semidefinite</p> </li> <li><p><span class="math-container">$X &gt; Y$</span> if <span class="math-container">$X - Y$</span> is positive definite.</p> </li> </ul> <p>Consider then the following two problems:</p> <p><span class="math-container">$$\max \left\{ X \,|\, X \text{ satisfies some conditions } \right\}\tag{1}\label{opt1}$$</span></p> <p><span class="math-container">$$\max \left\{ \operatorname{Tr}X \,|\, X \text{ satisfies some conditions } \right\}\tag{2}\label{opt2}$$</span></p> <p>where <span class="math-container">$X$</span> is Hermitian, and the maximum in \eqref{opt1} is with respect to the Loewner ordering. Are the problems equivalent?</p>
[ { "answer_id": 2086981, "score": 3, "text": "<p>Firstly, notice that there might be matrices $X$ and $X'$ for which neither $X\\geq X'$ nor $X' \\geq X$ holds, as the Loewner order is only a partial order.</p>\n\n<p>For problems of the type \n$$\\max \\{X: X\\in \\mathcal{Z}\\},\\tag{1}\\label{eq:1}$$ \nwhere $\\mathcal{Z}$ is a collection of symmetric matrices (also positive definite?), \nyou should first check whether every <em>chain</em> in $\\mathcal{Z}$ has an upper bound, that is, whether for every net $\\{X_\\alpha\\}_{\\alpha \\in A}\\subseteq \\mathcal{Z}$ which is a chain (i.e., for $\\alpha_1, \\alpha_2\\in A$ with $\\alpha_1\\leq \\alpha_2$ it is $X_{\\alpha_1}\\leq X_{\\alpha_2}$), there is an $\\alpha^*\\in A$ so that $X_\\alpha \\leq X_{\\alpha^*}$ for all $\\alpha \\in A$. If so, by Zorn's lemma, there will be a <em>maximal</em> element in $\\mathcal{Z}$, that is, an element $X^\\star\\in\\mathcal{Z}$ so that if $X\\in\\mathcal{Z}$ and $X\\geq X^\\star$, then $X=X^\\star$.</p>\n\n<p>This said, problem \\eqref{eq:1} should be interpreted as the problem of finding such a <em>maximal</em> element, and then the problem is well posed.</p>\n\n<p>Consider now the problem\n$$\\max \\{\\mathrm{trace}\\, X: X\\in\\mathcal{Z}\\}\\tag{2}\\label{eq:2}$$\nand assume that $\\mathcal{Z}$ is such that the problem has a solution, say $X^{\\star\\star}$. This means that for all $X\\in\\mathcal{Z}$, $\\mathrm{trace}\\, X \\leq \\mathrm{trace}\\, X^{\\star\\star}$. Take $Y\\in\\mathcal{Z}$ with $Y\\geq X^{\\star\\star}$. Then $\\mathrm{trace}\\, Y\\leq \\mathrm{trace}\\, X^{\\star\\star}$. We have that $Y-X^{\\star\\star}$ is positive semidefinite, so its trace is nonnegative; but by the previous inequality its trace is also nonpositive, hence zero, and a positive semidefinite matrix with zero trace is the zero matrix, therefore $Y=X^{\\star\\star}$. Therefore, the solutions of \\eqref{eq:2} are also solutions of \\eqref{eq:1}.</p>\n\n<p>Conversely, let $X^\\star$ be a solution of \\eqref{eq:1}: for all $X\\in\\mathcal{Z}$ with $X\\geq X^\\star$ it is $X=X^\\star$. Then I'm not sure whether $X^\\star$ is also a solution of \\eqref{eq:2}. I would say that problem \\eqref{eq:2} recovers some - but perhaps not all - solutions of problem \\eqref{eq:1}.</p>\n\n<p>As a side note, such problems may be solved with manifold optimization methods where $\\mathcal{Z}$ is the manifold of symmetric (positive definite) matrices. You may have a look at software such as <a href=\"http://www.manopt.org/tutorial.html#manifolds\" rel=\"nofollow noreferrer\">manopt</a>.</p>\n" }, { "answer_id": 2735865, "score": 2, "text": "<p>The two problems are, in general, not equivalent.</p>\n\n<p>For example, consider $2\\times 2$ matrices. Assume \"some conditions\" means that the matrix is of the form $\\begin{pmatrix}a&amp;0\\\\0&amp;-2a\\end{pmatrix}$. Let $Z$ denote the set of all such matrices.</p>\n\n<p>Now, for $M,N\\in Z$ with $M\\neq N$, we have that $M-N$ is of the form $\\begin{pmatrix}a&amp;0\\\\0&amp;-2a\\end{pmatrix}$ with $a\\neq 0$, and thus $M-N$ is not positive semidefinite, and thus $M\\nleq N$ and $M\\ngeq N$. So every $M$ is maximal in the set $Z$.</p>\n\n<p>Thus, every $M$ is a solution to (1).</p>\n\n<p>But it is easy to see that (2) is maximized by $M=0$.</p>\n" } ]
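The counterexample in the second answer can be verified numerically. Since the matrices $\operatorname{diag}(a, -2a)$ are diagonal, the Loewner comparison reduces to an entrywise comparison of diagonals; a minimal Python sketch (the test value $a=1$ is an arbitrary choice):

```python
def loewner_geq(X, Y):
    """Loewner order X >= Y for symmetric matrices given by their diagonals:
    diag(X) - diag(Y) is positive semidefinite iff every entry is >= 0."""
    return all(x - y >= 0 for x, y in zip(X, Y))

def Z(a):
    """Diagonal of the constrained matrix diag(a, -2a) from the counterexample."""
    return (a, -2 * a)

M, N = Z(1), Z(0)
# Neither M >= N nor N >= M: distinct elements of the set are incomparable,
# so every element is Loewner-maximal, i.e. solves problem (1).
print(loewner_geq(M, N), loewner_geq(N, M))  # False False
# But the trace a + (-2a) = -a is maximized only at a = 0, i.e. by N.
print(sum(M), sum(N))  # -1 0
```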
Question: Optimizing over matrices in the Loewner ordering Answer 1: <p>Firstly, notice that there might be matrices $X$ and $X'$ which are neither $X\geq X'$ nor $X' \geq X$ as the Loewner order is a partial order.</p> <p>For problems of the type $$\max \{X: X\in \mathcal{Z}\},\tag{1}\label{eq:1}$$ where $\mathcal{Z}$ is a collection of symmetric matrices (also positive definite?), you should first check whether every <em>chain</em> in $\mathcal{Z}$ has an upper bound, that is, whether for every net $\{X_\alpha\}_{\alpha \in A}\subseteq \mathcal{Z}$ which is a chain (i.e., for $\alpha_1, \alpha_2\in A$, with $\alpha_1\leq \alpha_2$ it is $X_{\alpha_1}\leq X_{\alpha_2}$), there is an $\alpha^*\in A$ so that $X_\alpha \leq X_{\alpha^*}$ for all $\alpha \in A$. If so, by Zorn's lemma, there will be a <em>maximal</em> element in $\mathcal{Z}$, that is, an element $X^\star\in\mathcal{Z}$ so that if $X\in\mathcal{Z}$ and $X\geq X^\star$, then $X=X^\star$.</p> <p>This said, problem \eqref{eq:1} should be interpreted as the problem of finding such a <em>maximal</em> element and then the problem is well posed.</p> <p>Consider now the problem $$\max \{\mathrm{trace}\, X: X\in\mathcal{Z}\}\tag{2}\label{eq:2}$$ and assume that $\mathcal{Z}$ is such that the problem has a solution, say $X^{\star\star}$. This means that for all $X\in\mathcal{Z}$, $\mathrm{trace}\, X \leq \mathrm{trace}\, X^{\star\star}$. Take $Y\in\mathcal{Z}$ with $Y\geq X^{\star\star}$. Then $\mathrm{trace}Y\leq \mathrm{trace} X^{\star\star}$. We have that $Y-X^{\star\star}$ is positive definite but it has a nonnegative trace, so it has zero trace, therefore, $Y=X^{\star\star}$. Therefore, the solutions of \eqref{eq:2} are also solutions of \eqref{eq:1}.</p> <p>Conversely, let $X^\star$ be a solution of \eqref{eq:1}: for all $X\in\mathcal{Z}$ with $X\geq X^\star$ it is $X=X^\star$. Then, I'm not sure whether $X^\star$ is also a solution of \eqref{eq:2}. 
I would say that problem \eqref{eq:2} recovers some - but perhaps not all - solutions of problem \eqref{eq:1}.</p> <p>As a side note, such problems may be solved with manifold optimization methods where $\mathcal{Z}$ is the manifold of symmetric (positive definite) matrices. You may have a look at software such as <a href="http://www.manopt.org/tutorial.html#manifolds" rel="nofollow noreferrer">manopt</a>.</p> Answer 2: <p>The two problems are, in general, not equivalent.</p> <p>For example, consider $2\times 2$ matrices. Assume "some conditions" means that the matrix is of the form $\begin{pmatrix}a&amp;0\\0&amp;-2a\end{pmatrix}$ with $a \geq 0$. Let $Z$ denote the set of all such matrices.</p> <p>Now, for $M,N\in Z$ with $M\neq N$, we have that $M-N$ is of the form $\begin{pmatrix}a&amp;0\\0&amp;-2a\end{pmatrix}$ with $a\neq 0$, and thus neither $M-N$ nor $N-M$ is positive semidefinite, so $M\nleq N$ and $M\ngeq N$. Hence every $M$ is maximal in the set $Z$.</p> <p>Thus, every $M$ is a solution to (1).</p> <p>But since $\mathrm{trace}\,M = -a \leq 0$, problem (2) is maximized only by $M=0$.</p>
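To make the counterexample above concrete, here is a small self-contained check (my own sketch, not part of the answer; I restrict to $a \geq 0$, since otherwise the trace $-a$ of $\operatorname{diag}(a,-2a)$ is unbounded above). It confirms that no two distinct members of $Z$ are Loewner-comparable, yet the trace is maximized only at $M = 0$:

```python
# Sanity check of the 2x2 counterexample: M = diag(a, -2a) with a >= 0.
# No two distinct members are comparable in the Loewner order, but the
# trace is maximized only at a = 0, i.e. at M = 0.

def is_psd_2x2_diag(d1, d2):
    """A 2x2 diagonal matrix is positive semidefinite iff both entries are >= 0."""
    return d1 >= 0 and d2 >= 0

def comparable(a, b):
    """Loewner-comparable: M - N or N - M is PSD, for M = diag(a,-2a), N = diag(b,-2b)."""
    c = a - b  # M - N = diag(c, -2c)
    return is_psd_2x2_diag(c, -2 * c) or is_psd_2x2_diag(-c, 2 * c)

values = [0.0, 0.5, 1.0, 2.0]
for a in values:
    for b in values:
        if a != b:
            assert not comparable(a, b)  # every member is maximal in the order

traces = {a: a + (-2 * a) for a in values}   # trace M = -a
assert max(traces, key=traces.get) == 0.0    # trace maximized only at M = 0
print("counterexample verified")
```

This matches the answer's point: maximality in the Loewner partial order (every element of $Z$) and maximality of the trace ($M=0$ only) pick out different solution sets.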
Fundamental theorem of arithmetic for 2
2,086,923
4
https://math.stackexchange.com/questions/2086923/fundamental-theorem-of-arithmetic-for-2
<p>It says every positive whole number can be written as a unique product of primes. Since $1$ is not a prime, what is the product for $2$? Is it just $2$? But then it wouldn't be a product. </p>
[ { "answer_id": 2087864, "score": 3, "text": "<p>I am so old that I was actually taught as a child that $1$ is a prime number. It's not, and though I had a subconscious feeling I had been taught incorrectly, decades passed before I actually understood.</p>\n\n<p>As you already know, $6 = 2 \\times 3$ is the product of two distinct primes. By the way $3 \\times 2$ is not a distinct factorization. We agree that reordering the prime factors does not change the factorization, even though we agree to write the primes in ascending order.</p>\n\n<p>Now, look at the powers of $2$. You also know that $4 = 2^2$, also the product of two primes, but they're not distinct primes, they're both $2$. And also, $2^3 = 8$, $2^4 = 16$, $2^5 = 32$, etc.</p>\n\n<p>Going in the other direction we have $2^1 = 2$. No surprise there. And $2^0 = 1$. And, as I hinted in a comment to another answer, $$2^{-1} = {1 \\over 2}.$$</p>\n\n<p>Clearly $${32 \\over 2} = 16, {16 \\over 2} = 8, {8 \\over 2} = 4, {4 \\over 2} = 2, {2 \\over 2} = 1, {1 \\over 2} = {1 \\over 2}, {{1 \\over 2} \\over 2} = {1 \\over 4}, {{1 \\over 4} \\over 2} = {1 \\over 8}, {{1 \\over 8} \\over 2} = {1 \\over 16}, \\ldots$$</p>\n\n<p>So, if $p^\\alpha$ is the product of multiplying a prime $p$ by itself $\\alpha - 1$ times, then $1$ is the product of multiplying a prime by itself $0$ times... wait, I seem to have painted myself into a corner: since $p$ can be any prime, or a composite number for that matter, that would mean that the factorization of $1$ is not actually distinct... But the point is that $1$ is nevertheless a product nonetheless.</p>\n\n<p>Let me try to jump over the wet paint. You also know that $(-2)^2 = 4$. But why doesn't this count as a unique factorization? 
Because $-1$ is a unit, a number that does not multiplicatively change the norm of a number, which we agree in this domain to be the absolute value function.</p>\n\n<p>Thus $(-2)^2$ is not a distinct factorization of $4$, and neither is $-1 \\times 2^2$, $(-1)^3 \\times 2^2$, $(-1)^5 \\times 2^2$, etc. Neither is $1^\\alpha 2^2$, whatever $\\alpha$ happens to be, as long as it be a number. Whether you consider $-2$ to be a prime number or not, that's another can of worms. If you don't, you can choose to view it as the prime number $2$ multiplied by the unit $-1$.</p>\n\n<p>Which means that neither order nor multiplication by units makes for a distinct factorization. You can view $1$ as the product of one unit times no primes. Or as the square of the unit $-1$. You can throw as many units in as you want, as long as it lands the product on the desired side of the number line, or the desired quadrant of the complex plane.</p>\n\n<p>These concepts will be important if you go on to study algebraic number theory. For example, $(10 - 7 \\sqrt 2)^2 (10 + 7 \\sqrt 2)^2$ and $(\\sqrt 2)^4$ are not distinct factorizations of $4$ in a certain domain of algebraic numbers that includes $\\sqrt 2$ as a prime number and $2$ as correspondingly composite. But in that domain, $1$ and $-1$ are also units, and unique factorization also holds.</p>\n" }, { "answer_id": 2086927, "score": 2, "text": "<p>What it means is that any number can be factored into a product of primes. That is, every integer has a prime factorization. Of course, with prime numbers, their prime factorization product is just themselves. </p>\n" }, { "answer_id": 2087776, "score": 2, "text": "<p>1 is the product of no primes at all, it is a unit. And in that way, it is the most unique among the positive integers, among all numbers, actually. 
All other positive integers are each the product of at least one prime number (itself, in the case of a prime number).</p>\n\n<p>It might be helpful to review the concept of \"empty sum\" and \"empty product\". Both of these are identities, too.</p>\n\n<p>The empty sum is 0, right? You add 0 to a number $x$ and you still have $x$. You added nothing, you added emptiness. $x + 0 = x$.</p>\n\n<p>But 0 is not the empty product. Unless $x = 0$, we have $x \\times 0 \\neq x$. The empty product has to be something that does not change $x$. It has to be something that is the same as not performing multiplication, that is, empty multiplication. So $x \\times 1 = x$. So 1 is the empty product.</p>\n\n<p>Viewed in this way, then:</p>\n\n<ul>\n<li>1 is the empty product of no primes at all, zero primes.</li>\n<li>A prime number $p$ is its own product, the product of a single prime.</li>\n<li>A semiprime is the product of two prime numbers, not necessarily distinct.</li>\n<li>Sphenic numbers are the product of three distinct primes. Cubes of primes also have three prime factors, though not distinct. Also in this bullet point, numbers of the form $p^2 q$, where $p$ and $q$ are distinct primes.</li>\n<li>Products of four primes, etc.</li>\n</ul>\n\n<p>To really bend your mind, ask: what is the product of $-1$ primes?</p>\n" } ]
Question: Fundamental theorem of arithmetic for 2 Answer 1: <p>I am so old that I was actually taught as a child that $1$ is a prime number. It's not, and though I had a subconscious feeling I had been taught incorrectly, decades passed before I actually understood.</p> <p>As you already know, $6 = 2 \times 3$ is the product of two distinct primes. By the way $3 \times 2$ is not a distinct factorization. We agree that reordering the prime factors does not change the factorization, even though we agree to write the primes in ascending order.</p> <p>Now, look at the powers of $2$. You also know that $4 = 2^2$, also the product of two primes, but they're not distinct primes, they're both $2$. And also, $2^3 = 8$, $2^4 = 16$, $2^5 = 32$, etc.</p> <p>Going in the other direction we have $2^1 = 2$. No surprise there. And $2^0 = 1$. And, as I hinted in a comment to another answer, $$2^{-1} = {1 \over 2}.$$</p> <p>Clearly $${32 \over 2} = 16, {16 \over 2} = 8, {8 \over 2} = 4, {4 \over 2} = 2, {2 \over 2} = 1, {1 \over 2} = {1 \over 2}, {{1 \over 2} \over 2} = {1 \over 4}, {{1 \over 4} \over 2} = {1 \over 8}, {{1 \over 8} \over 2} = {1 \over 16}, \ldots$$</p> <p>So, if $p^\alpha$ is the product of multiplying a prime $p$ by itself $\alpha - 1$ times, then $1$ is the product of multiplying a prime by itself $0$ times... wait, I seem to have painted myself into a corner: since $p$ can be any prime, or a composite number for that matter, that would mean that the factorization of $1$ is not actually distinct... But the point is that $1$ is nevertheless a product nonetheless.</p> <p>Let me try to jump over the wet paint. You also know that $(-2)^2 = 4$. But why doesn't this count as a unique factorization? 
Because $-1$ is a unit, a number that does not multiplicatively change the norm of a number, which we agree in this domain to be the absolute value function.</p> <p>Thus $(-2)^2$ is not a distinct factorization of $4$, and neither is $-1 \times 2^2$, $(-1)^3 \times 2^2$, $(-1)^5 \times 2^2$, etc. Neither is $1^\alpha 2^2$, whatever $\alpha$ happens to be, as long as it be a number. Whether you consider $-2$ to be a prime number or not, that's another can of worms. If you don't, you can choose to view it as the prime number $2$ multiplied by the unit $-1$.</p> <p>Which means that neither order nor multiplication by units makes for a distinct factorization. You can view $1$ as the product of one unit times no primes. Or as the square of the unit $-1$. You can throw as many units in as you want, as long as it lands the product on the desired side of the number line, or the desired quadrant of the complex plane.</p> <p>These concepts will be important if you go on to study algebraic number theory. For example, $(10 - 7 \sqrt 2)^2 (10 + 7 \sqrt 2)^2$ and $(\sqrt 2)^4$ are not distinct factorizations of $4$ in a certain domain of algebraic numbers that includes $\sqrt 2$ as a prime number and $2$ as correspondingly composite. But in that domain, $1$ and $-1$ are also units, and unique factorization also holds.</p> Answer 2: <p>What it means is that any number can be factored into a product of primes. That is, every integer has a prime factorization. Of course, with prime numbers, their prime factorization product is just themselves. </p> Answer 3: <p>1 is the product of no primes at all, it is a unit. And in that way, it is the most unique among the positive integers, among all numbers, actually. All other positive integers are each the product of at least one prime number (itself, in the case of a prime number).</p> <p>It might be helpful to review the concept of "empty sum" and "empty product". Both of these are identities, too.</p> <p>The empty sum is 0, right? 
You add 0 to a number $x$ and you still have $x$. You added nothing, you added emptiness. $x + 0 = x$.</p> <p>But 0 is not the empty product. Unless $x = 0$, we have $x \times 0 \neq x$. The empty product has to be something that does not change $x$. It has to be something that is the same as not performing multiplication, that is, empty multiplication. So $x \times 1 = x$. So 1 is the empty product.</p> <p>Viewed in this way, then:</p> <ul> <li>1 is the empty product of no primes at all, zero primes.</li> <li>A prime number $p$ is its own product, the product of a single prime.</li> <li>A semiprime is the product of two prime numbers, not necessarily distinct.</li> <li>Sphenic numbers are the product of three distinct primes. Cubes of primes also have three prime factors, though not distinct. Also in this bullet point, numbers of the form $p^2 q$, where $p$ and $q$ are distinct primes.</li> <li>Products of four primes, etc.</li> </ul> <p>To really bend your mind, ask: what is the product of $-1$ primes?</p>
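As a small illustration of the "empty product" convention discussed above (my own sketch, not from the answers): Python's `math.prod` returns $1$ for an empty iterable, so "$2$ is the product of the single prime $2$" and "$1$ is the product of zero primes" are both literal statements:

```python
# Trial-division factorization; the point is the edge cases n = 1 and n = prime.
from math import prod

def prime_factors(n):
    """Return the prime factors of n >= 1 in ascending order, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(2))         # [2]  -- a product of one prime
print(prime_factors(1))         # []   -- a product of zero primes
print(prod(prime_factors(1)))   # 1    -- the empty product
```

Reassembling any of these factor lists with `prod` recovers the original number, including $n=1$, which is exactly why the fundamental theorem is usually stated for products of zero or more primes.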
Direct proof of the level four eta identity
2,086,969
4
https://math.stackexchange.com/questions/2086969/direct-proof-of-the-level-four-eta-identity
<p>Of Somos' <a href="http://eta.math.georgetown.edu/" rel="nofollow noreferrer">over 6200 eta identities,</a> the one of smallest level (maybe the simplest?) is the eta identity of level $4$ $$\eta(z)^{16} \eta(4z)^8 + 16 \eta(4z)^{16} \eta(z)^8 = \eta(2z)^{24}.$$</p> <p>As with all identities among modular forms, we can prove it by checking the first few coefficients. I wonder if it is possible to give a "real" proof of this straight from the product $$\eta(z) = q^{1/24} \prod_{n=1}^{\infty} (1 - q^n)$$ in the style of <a href="https://math.stackexchange.com/questions/483708/how-to-prove-that-frac-eta14q4-eta4q8-4-eta4q2-eta2q4">this answer</a>. The higher exponents in this problem seem to make it harder to use Jacobi's triple product.</p>
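Independently of the structural proof being asked for, the identity can at least be sanity-checked numerically straight from the $q$-product $\eta(z) = q^{1/24}\prod_{n\ge1}(1-q^n)$. Below is a rough sketch (the truncation depth, the helper name `eta_it`, and the restriction to purely imaginary $z = it$, where $q$ is real, are my own choices), comparing $\eta(z)^{16}\eta(4z)^8 + 16\,\eta(4z)^{16}\eta(z)^8$ with $\eta(2z)^{24}$:

```python
import math

def eta_it(t, terms=200):
    """Dedekind eta at z = i*t (t > 0), via eta(z) = q^(1/24) * prod(1 - q^n), q = exp(2*pi*i*z)."""
    q = math.exp(-2.0 * math.pi * t)  # q is real and in (0, 1) for purely imaginary z
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 1.0 - q ** n
    return q ** (1.0 / 24.0) * prod

t = 1.0  # i.e. z = i; any t > 0 should do
lhs = eta_it(t) ** 16 * eta_it(4 * t) ** 8 + 16 * eta_it(4 * t) ** 16 * eta_it(t) ** 8
rhs = eta_it(2 * t) ** 24
assert abs(lhs - rhs) < 1e-9 * rhs  # both sides agree to floating-point accuracy
```

This is of course weaker than even coefficient checking, but it is a useful smoke test when manipulating the eta quotients.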
[ { "answer_id": 2171142, "score": 2, "text": "<p>Consider the product representations of the <a href=\"https://en.wikipedia.org/wiki/Theta_function\" rel=\"nofollow noreferrer\">Jacobi Thetanulls</a>\n(as functions of the period ratio $\\tau$):\n$$\\begin{align}\n \\varTheta_{00}(\\tau)\n &amp;= \\frac{\\eta(\\tau)^5}{\\eta\\left(\\frac{\\tau}{2}\\right)^2\\eta(2\\tau)^2}\n&amp; \\varTheta_{01}(\\tau)\n &amp;= \\frac{\\eta\\left(\\frac{\\tau}{2}\\right)^2}{\\eta(\\tau)}\n&amp; \\varTheta_{10}(\\tau)\n &amp;= \\frac{2\\eta(2\\tau)^2}{\\eta(\\tau)}\n\\end{align}$$\nand plug these into Jacobi's <em>Vierergleichung</em>\n$$\\varTheta_{00}^4 = \\varTheta_{01}^4 + \\varTheta_{10}^4$$\nThis results in\n$$\\frac{\\eta(\\tau)^{20}}{\\eta\\left(\\frac{\\tau}{2}\\right)^8\\eta(2\\tau)^8}\n= \\frac{\\eta\\left(\\frac{\\tau}{2}\\right)^8}{\\eta(\\tau)^4}\n+ \\frac{16\\,\\eta(2\\tau)^8}{\\eta(\\tau)^4}$$\nMultiplying out the denominators and setting $\\tau=2z$\nthen results in the identity you asked about.</p>\n\n<hr>\n\n<p>One way of proving the <em>Vierergleichung</em> works by first deducing\nthe period-doubling/-halving relations\n$$\\begin{align}\n\t\\varTheta_{00}(\\tau)^2\n\t&amp;= \\varTheta_{00}(2\\tau)^2+\\varTheta_{10}(2\\tau)^2\n&amp;\t\\varTheta_{00}(2\\tau)^2\n\t&amp;= \\frac{1}{2}\n\t\\left(\\varTheta_{00}(\\tau)^2+\\varTheta_{01}(\\tau)^2\\right)\n\\\\\t\\varTheta_{01}(\\tau)^2\n\t&amp;= \\varTheta_{00}(2\\tau)^2-\\varTheta_{10}(2\\tau)^2\n&amp;\t\\varTheta_{01}(2\\tau)^2\n\t&amp;= \\varTheta_{00}(\\tau)\\,\\varTheta_{01}(\\tau)\n\\\\\t\\varTheta_{10}(\\tau)^2\n\t&amp;= 2\\,\\varTheta_{00}(2\\tau)\\,\\varTheta_{10}(2\\tau)\n&amp;\t\\varTheta_{10}(2\\tau)^2\n\t&amp;= \\frac{1}{2}\n\t\\left(\\varTheta_{00}(\\tau)^2-\\varTheta_{01}(\\tau)^2\\right)\n\\end{align}$$\nThe middle left equation has been deduced in my answer to <a href=\"https://math.stackexchange.com/questions/483708\">the question you\nreferenced</a>.\nThe upper left then follows from replacing $\\tau$ 
by $\\tau+1$.\nAddition/subtraction reveals the upper and lower right identities.\nThe middle right identity easily follows from its left neighbor\nonce you plug in\n$$\\begin{align}\n \\varTheta_{00}(\\tau)\n\t&amp;= \\varTheta_{00}(4\\tau)+\\varTheta_{10}(4\\tau)\n\\\\ \\varTheta_{01}(\\tau)\n\t&amp;= \\varTheta_{00}(4\\tau)-\\varTheta_{10}(4\\tau)\n\\end{align}$$\nwhich is just a splitting of $q$-series into odd and even terms.\nFinally, plugging in the above two even/odd-splitting equations in the\nlower right identity yields the lower left identity.</p>\n\n<p>Anyway, once you have established the above set of identities, you can deduce\n$$\\begin{align}\n \\varTheta_{10}(\\tau)^4\n\t&amp;= 4\\,\\varTheta_{00}(2\\tau)^2\\,\\varTheta_{10}(2\\tau)^2\n\\\\\t&amp;= \\left(\\varTheta_{00}(\\tau)^2+\\varTheta_{01}(\\tau)^2\\right)\n\t\\left(\\varTheta_{00}(\\tau)^2-\\varTheta_{01}(\\tau)^2\\right)\n\\\\\t&amp;= \\varTheta_{00}(\\tau)^4-\\varTheta_{01}(\\tau)^4\n\\end{align}$$\nwhich is the <em>Vierergleichung</em>.</p>\n" }, { "answer_id": 2466492, "score": 2, "text": "<p>I am surprised at the idea that this needs \"real\" proof. The identity is the very first one in my list and the very first formulation of it is the \"aequatio identica satis abstrusa\" straight from Jacobi's <em>Fundamenta Nova Theoriae Functionum Ellipticarum</em>, page 90 equation (14) published in 1829. There is a reference to Bruce Berndt's editing of Ramanujan's Notebooks, Part III, page 40, Entry 25 (vii) which is exactly the same that is used in the parametrization of the Arithmetico-Geometric Mean by squares of the theta null functions. 
That is, $\\;\\theta_3^4(q) = \\theta_4^4(q) + \\theta_2^4(q)\\;$ is the $a^2=b^2+c^2$ of the AGM and this goes back to Gauss even before Jacobi.</p>\n" }, { "answer_id": 2170538, "score": 0, "text": "<p>This identity</p>\n\n<p>$$\\eta(z)^{16} \\eta(4z)^8 + 16 \\eta(4z)^{16} \\eta(z)^8 = \\eta(2z)^{24}$$</p>\n\n<p>written as</p>\n\n<p>$$\\bigg(\\frac{\\eta(z)}{\\eta(2z)}\\bigg)^{16}\\bigg(\\frac{\\eta(4z)}{\\eta(2z)}\\bigg)^8+16\\bigg(\\frac{\\eta(z)}{\\eta(2z)}\\bigg)^8\\bigg(\\frac{\\eta(4z)}{\\eta(2z)}\\bigg)^{16}=1$$</p>\n\n<p>is equal respectively, to</p>\n\n<p>$$\\beta_{4n}+\\alpha_{4n}=1.$$</p>\n\n<p>Using Ramanujan's notations for</p>\n\n<p>$\\eta(z)=\\frac{\\phi(q)}{\\sqrt{2}(G_{n})^2}$</p>\n\n<p>$\\eta(2z)=\\frac{\\phi(q^2)}{\\sqrt{2}(G_{4n})^2}$</p>\n\n<p>$\\eta(4z)=\\frac{\\phi(q^4)}{\\sqrt{2}(G_{16n})^2}$</p>\n\n<p>and that</p>\n\n<p>$\\frac{\\phi(q)}{\\phi(q^2)}=\\frac{\\beta_{4n}^{1/4}}{\\beta_{n}^{1/8}}$</p>\n\n<p>$\\frac{\\phi(q^2)}{\\phi(q^4)}=\\frac{\\beta_{16n}^{1/4}}{\\beta_{4n}^{1/8}}$</p>\n\n<p>and moreover</p>\n\n<p>$\\beta_{n}=\\frac{g_{n}^8}{G_{n}^8}$</p>\n\n<p>$\\beta_{4n}=\\frac{g_{4n}^8}{G_{4n}^8}$</p>\n\n<p>$\\beta_{16n}=\\frac{g_{16n}^8}{G_{16n}^8}$</p>\n\n<p>$$4g_{4n}^8 G_{4n}^8\\big(G_{4n}^8-g_{4n}^8\\big)=1\\tag1$$</p>\n\n<p>we arrive to</p>\n\n<p>$\\beta_{4n}=\\bigg(\\frac{\\eta(z)}{\\eta(2z)}\\bigg)^{16}\\bigg(\\frac{\\eta(4z)}{\\eta(2z)}\\bigg)^8=\\frac{\\beta_{4n}^{5}G_{4n}^{48}}{\\beta_{n}^2 G_{n}^{32} \\beta_{16n}^2 G_{16n}^{16}}=$</p>\n\n<p>$=\\frac{g_{4n}^{40} G_{4n}^8}{g{n}^{16} G_{n}^{16} g_{16n}^{16}}=$</p>\n\n<p>$=\\frac{g_{4n}^{40} G_{4n}^8}{g_{n}^{16} G_{n}^{16} 2^4 g_{4n}^{16} G_{4n}^{16}}=$</p>\n\n<p>$=\\frac{g_{4n}^{40} G_{4n}^8}{g_{4n}^{16} g_{4n}^{16} G_{4n}^{16}}$</p>\n\n<p>simplifying</p>\n\n<p>$\\beta_{4n}=\\frac{g_{4n}^8}{G_{4n}^8}$</p>\n\n<p>With analog considerations we have</p>\n\n<p>$$\\frac{g_{4n}^8}{ G_{4n}^8}+\\frac{1}{4g_{4n}^8 G_{4n}^{16}}=1$$</p>\n\n<p>or that is the same the equation (1).</p>\n" }, 
]
Question: Direct proof of the level four eta identity Answer 1: <p>Consider the product representations of the <a href="https://en.wikipedia.org/wiki/Theta_function" rel="nofollow noreferrer">Jacobi Thetanulls</a> (as functions of the period ratio $\tau$): $$\begin{align} \varTheta_{00}(\tau) &amp;= \frac{\eta(\tau)^5}{\eta\left(\frac{\tau}{2}\right)^2\eta(2\tau)^2} &amp; \varTheta_{01}(\tau) &amp;= \frac{\eta\left(\frac{\tau}{2}\right)^2}{\eta(\tau)} &amp; \varTheta_{10}(\tau) &amp;= \frac{2\eta(2\tau)^2}{\eta(\tau)} \end{align}$$ and plug these into Jacobi's <em>Vierergleichung</em> $$\varTheta_{00}^4 = \varTheta_{01}^4 + \varTheta_{10}^4$$ This results in $$\frac{\eta(\tau)^{20}}{\eta\left(\frac{\tau}{2}\right)^8\eta(2\tau)^8} = \frac{\eta\left(\frac{\tau}{2}\right)^8}{\eta(\tau)^4} + \frac{16\,\eta(2\tau)^8}{\eta(\tau)^4}$$ Multiplying out the denominators and setting $\tau=2z$ then results in the identity you asked about.</p> <hr> <p>One way of proving the <em>Vierergleichung</em> works by first deducing the period-doubling/-halving relations $$\begin{align} \varTheta_{00}(\tau)^2 &amp;= \varTheta_{00}(2\tau)^2+\varTheta_{10}(2\tau)^2 &amp; \varTheta_{00}(2\tau)^2 &amp;= \frac{1}{2} \left(\varTheta_{00}(\tau)^2+\varTheta_{01}(\tau)^2\right) \\ \varTheta_{01}(\tau)^2 &amp;= \varTheta_{00}(2\tau)^2-\varTheta_{10}(2\tau)^2 &amp; \varTheta_{01}(2\tau)^2 &amp;= \varTheta_{00}(\tau)\,\varTheta_{01}(\tau) \\ \varTheta_{10}(\tau)^2 &amp;= 2\,\varTheta_{00}(2\tau)\,\varTheta_{10}(2\tau) &amp; \varTheta_{10}(2\tau)^2 &amp;= \frac{1}{2} \left(\varTheta_{00}(\tau)^2-\varTheta_{01}(\tau)^2\right) \end{align}$$ The middle left equation has been deduced in my answer to <a href="https://math.stackexchange.com/questions/483708">the question you referenced</a>. The upper left then follows from replacing $\tau$ by $\tau+1$. Addition/subtraction reveals the upper and lower right identities. 
The middle right identity easily follows from its left neighbor once you plug in $$\begin{align} \varTheta_{00}(\tau) &amp;= \varTheta_{00}(4\tau)+\varTheta_{10}(4\tau) \\ \varTheta_{01}(\tau) &amp;= \varTheta_{00}(4\tau)-\varTheta_{10}(4\tau) \end{align}$$ which is just a splitting of $q$-series into odd and even terms. Finally, plugging in the above two even/odd-splitting equations in the lower right identity yields the lower left identity.</p> <p>Anyway, once you have established the above set of identities, you can deduce $$\begin{align} \varTheta_{10}(\tau)^4 &amp;= 4\,\varTheta_{00}(2\tau)^2\,\varTheta_{10}(2\tau)^2 \\ &amp;= \left(\varTheta_{00}(\tau)^2+\varTheta_{01}(\tau)^2\right) \left(\varTheta_{00}(\tau)^2-\varTheta_{01}(\tau)^2\right) \\ &amp;= \varTheta_{00}(\tau)^4-\varTheta_{01}(\tau)^4 \end{align}$$ which is the <em>Vierergleichung</em>.</p> Answer 2: <p>I am surprised at the idea that this needs "real" proof. The identity is the very first one in my list and the very first formulation of it is the "aequatio identica satis abstrusa" straight from Jacobi's <em>Fundamenta Nova Theoriae Functionum Ellipticarum</em>, page 90 equation (14) published in 1829. There is a reference to Bruce Berndt's editing of Ramanujan's Notebooks, Part III, page 40, Entry 25 (vii) which is exactly the same that is used in the parametrization of the Arithmetico-Geometric Mean by squares of the theta null functions. 
That is, $\;\theta_3^4(q) = \theta_4^4(q) + \theta_2^4(q)\;$ is the $a^2=b^2+c^2$ of the AGM and this goes back to Gauss even before Jacobi.</p> Answer 3: <p>This identity</p> <p>$$\eta(z)^{16} \eta(4z)^8 + 16 \eta(4z)^{16} \eta(z)^8 = \eta(2z)^{24}$$</p> <p>written as</p> <p>$$\bigg(\frac{\eta(z)}{\eta(2z)}\bigg)^{16}\bigg(\frac{\eta(4z)}{\eta(2z)}\bigg)^8+16\bigg(\frac{\eta(z)}{\eta(2z)}\bigg)^8\bigg(\frac{\eta(4z)}{\eta(2z)}\bigg)^{16}=1$$</p> <p>is equal respectively, to</p> <p>$$\beta_{4n}+\alpha_{4n}=1.$$</p> <p>Using Ramanujan's notations for</p> <p>$\eta(z)=\frac{\phi(q)}{\sqrt{2}(G_{n})^2}$</p> <p>$\eta(2z)=\frac{\phi(q^2)}{\sqrt{2}(G_{4n})^2}$</p> <p>$\eta(4z)=\frac{\phi(q^4)}{\sqrt{2}(G_{16n})^2}$</p> <p>and that</p> <p>$\frac{\phi(q)}{\phi(q^2)}=\frac{\beta_{4n}^{1/4}}{\beta_{n}^{1/8}}$</p> <p>$\frac{\phi(q^2)}{\phi(q^4)}=\frac{\beta_{16n}^{1/4}}{\beta_{4n}^{1/8}}$</p> <p>and moreover</p> <p>$\beta_{n}=\frac{g_{n}^8}{G_{n}^8}$</p> <p>$\beta_{4n}=\frac{g_{4n}^8}{G_{4n}^8}$</p> <p>$\beta_{16n}=\frac{g_{16n}^8}{G_{16n}^8}$</p> <p>$$4g_{4n}^8 G_{4n}^8\big(G_{4n}^8-g_{4n}^8\big)=1\tag1$$</p> <p>we arrive to</p> <p>$\beta_{4n}=\bigg(\frac{\eta(z)}{\eta(2z)}\bigg)^{16}\bigg(\frac{\eta(4z)}{\eta(2z)}\bigg)^8=\frac{\beta_{4n}^{5}G_{4n}^{48}}{\beta_{n}^2 G_{n}^{32} \beta_{16n}^2 G_{16n}^{16}}=$</p> <p>$=\frac{g_{4n}^{40} G_{4n}^8}{g_{n}^{16} G_{n}^{16} g_{16n}^{16}}=$</p> <p>$=\frac{g_{4n}^{40} G_{4n}^8}{g_{n}^{16} G_{n}^{16} 2^4 g_{4n}^{16} G_{4n}^{16}}=$</p> <p>$=\frac{g_{4n}^{40} G_{4n}^8}{g_{4n}^{16} g_{4n}^{16} G_{4n}^{16}}$</p> <p>simplifying</p> <p>$\beta_{4n}=\frac{g_{4n}^8}{G_{4n}^8}$</p> <p>With analog considerations we have</p> <p>$$\frac{g_{4n}^8}{ G_{4n}^8}+\frac{1}{4g_{4n}^8 G_{4n}^{16}}=1$$</p> <p>or that is the same the equation (1).</p>
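Answer 1 rests on Jacobi's Vierergleichung $\varTheta_{00}^4 = \varTheta_{01}^4 + \varTheta_{10}^4$, i.e. $\theta_3^4 = \theta_4^4 + \theta_2^4$ in Answer 2's notation. As a quick numerical cross-check (not a proof; the series cutoff, the helper name `theta_nulls`, and the sample point are arbitrary choices of mine), the three theta-nulls can be summed directly from their $q$-series:

```python
import math

def theta_nulls(q, cutoff=60):
    """Theta-null q-series for 0 < q < 1:
    theta2 = 2*sum q^((n+1/2)^2), theta3 = sum q^(n^2), theta4 = sum (-1)^n q^(n^2)."""
    t3 = 1.0 + 2.0 * sum(q ** (n * n) for n in range(1, cutoff))
    t4 = 1.0 + 2.0 * sum((-1) ** n * q ** (n * n) for n in range(1, cutoff))
    t2 = 2.0 * sum(q ** ((n + 0.5) ** 2) for n in range(cutoff))
    return t2, t3, t4

q = math.exp(-math.pi)  # the classical point tau = i
t2, t3, t4 = theta_nulls(q)
assert abs(t3 ** 4 - (t4 ** 4 + t2 ** 4)) < 1e-12  # theta3^4 = theta4^4 + theta2^4
```

The terms decay like $q^{n^2}$, so a modest cutoff already gives full double precision.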
Alternative approaches to showing that $\gamma=\int_0^\infty \left(\frac{1}{1+x^a}-\frac{1}{e^x}\right)\,\frac1x\,dx$, $a&gt;0$
2,087,040
4
https://math.stackexchange.com/questions/2087040/alternative-approaches-to-showing-that-gamma-int-0-infty-left-frac11x
<p>Starting from the limit definition of the Euler-Mascheroni constant <span class="math-container">$\gamma$</span> as given by</p> <blockquote> <p><span class="math-container">$$\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)\tag 1$$</span></p> </blockquote> <p>we can show that <span class="math-container">$\gamma$</span> has an integral representation</p> <blockquote> <p><span class="math-container">$$\gamma=\int_0^\infty\left(\frac{1}{e^x-1}-\frac{1}{xe^x}\right)\,dx \tag 2$$</span></p> </blockquote> <hr /> <blockquote> <p><strong>Proof of <span class="math-container">$(2)$</span>: This is provided for completeness only and one can skip this part without losing context.</strong></p> </blockquote> <p>To show that the integral in <span class="math-container">$(2)$</span> is equivalent to <span class="math-container">$(1)$</span>, we can proceed as follows.</p> <p><span class="math-container">$$\begin{align} \int_0^\infty\left(\frac{1}{e^x-1}-\frac{1}{xe^x}\right)\,dx&amp;= \int_0^\infty \frac{e^{-x}}{1-e^{-x}}\left(1-\frac{1-e^{-x}}{x}\right)\,dx\\\\ &amp;=\sum_{k=1}^\infty \int_0^\infty\left(e^{-kx}-\frac{e^{-kx}-e^{-(k+1)x}}{x}\right)\,dx\\\\ &amp;=\sum_{k=1}^\infty \left(\frac{1}{k}-\log\left(\frac{k+1}{k}\right)\right)\\\\ &amp;=\lim_{n\to \infty}\sum_{k=1}^n \left(\frac1k -\log\left(\frac{k+1}{k}\right)\right)\\\\ &amp;=\lim_{n\to \infty}\left(-\log(n+1)+\sum_{k=1}^n\frac1k\right)\\\\ &amp;=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)\\\\ \end{align}$$</span></p> <hr /> <p>Another integral representation for <span class="math-container">$\gamma$</span> is given by</p> <blockquote> <p><span class="math-container">$$\gamma=\int_0^\infty \left(\frac{1}{x(1+x^a)}-\frac{1}{xe^x}\right) \,dx \tag 3$$</span></p> </blockquote> <p>for <span class="math-container">$a&gt;0$</span>.</p> <p>Equipped with <span class="math-container">$(2)$</span>, we can show the equivalence of <span class="math-container">$(3)$</span> with <span 
class="math-container">$(1)$</span> by showing that</p> <blockquote> <p><span class="math-container">$$\int_0^\infty \left(\frac{1}{x(1+x^a)}-\frac{1}{e^x-1}\right)\,dx=0\tag 4$$</span></p> </blockquote> <hr /> <p>To prove <span class="math-container">$(4)$</span>, I proceeded as follows.</p> <p><span class="math-container">$$\begin{align} \lim_{\epsilon\to 0^+}\int_{\epsilon}^\infty\left(\frac{1}{x(1+x^a)}-\frac{1}{e^x-1}\right)\,dx &amp;=\lim_{\epsilon\to 0^+}\left.\left(-\frac1a \log(1+x^{-a})-\log(1-e^{-x})\right)\right|_{\epsilon}^{\infty}\\\\ &amp;=\lim_{\epsilon\to 0^+}\left(\frac1a \log(1+\epsilon^{-a})+\log(1-e^{-\epsilon})\right)\\\\ &amp;=0 \end{align}$$</span></p> <p>And we are done!</p> <hr /> <p>This approach seemed a bit cumbersome and indirect.</p> <blockquote> <p><strong>QUESTION: So, what are alternative approaches to establishing equivalence of <span class="math-container">$(3)$</span> and <span class="math-container">$(1)$</span>?</strong></p> </blockquote>
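Before comparing proofs, it is reassuring to confirm $(2)$ and $(3)$ numerically. The sketch below is stdlib-only Python; `integral_0_inf` is a small helper written here (not a library routine), and $a=2$ in $(3)$ is an arbitrary illustrative choice. The substitution $x=e^t$ maps $(0,\infty)$ to the whole line, where the trapezoidal rule converges very fast for such smooth, decaying integrands:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, for comparison

def integral_0_inf(f, lo=-30.0, hi=6.5, h=0.005):
    """Approximate the integral of f over (0, inf) via x = exp(t) and the trapezoidal rule."""
    n = int(round((hi - lo) / h))
    total = 0.0
    for i in range(n + 1):
        x = math.exp(lo + i * h)
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal endpoint weights
        total += w * f(x) * x            # the factor x is the Jacobian dx/dt
    return h * total

# Representation (2): the integrand tends to 1/2 as x -> 0; math.expm1
# avoids catastrophic cancellation in e^x - 1 for tiny x.
g2 = integral_0_inf(lambda x: 1.0 / math.expm1(x) - math.exp(-x) / x)

# Representation (3), illustrated with a = 2; this tail decays only like
# x^(-3), so the upper cutoff is pushed further out (hi = 25, x ~ 7e10).
g3 = integral_0_inf(lambda x: (1.0 / (1.0 + x * x) - math.exp(-x)) / x, hi=25.0)

assert abs(g2 - GAMMA) < 1e-6
assert abs(g3 - GAMMA) < 1e-6
```

The cutoffs `lo`, `hi` and the step `h` are ad hoc; with these values both integrals match $\gamma$ to well under the $10^{-6}$ tolerance asserted.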
[ { "answer_id": 2087077, "score": 7, "text": "<p>Integrating by parts,\n$$ \\int_0^{\\infty} \\left( \\frac{1}{1+x^a} - e^{-x} \\right) \\frac{dx}{x} = 0-0 + \\int_0^{\\infty} \\left( \\frac{ax^a}{x(1+x^a)^2} - e^{-x} \\right) \\log{x} \\, dx $$</p>\n\n<p>Of course, we recognise the second term as a familiar definition of/easy-to-derive formula for $\\gamma$. The first term we need to show is zero. But\n$$ \\int \\frac{ax^a\\log{x}}{x(1+x^a)^2} \\, dx = \\frac{x^a\\log{x}}{1+x^a} - \\frac{1}{a}\\log{(1+x^a)}, $$\nwhich is continuous and tends to zero at both endpoints since $a&gt;0$.</p>\n\n<hr>\n\n<p>The really interesting thing about this result in my opinion is that it shows the first term is a complete red herring: let $F$ be continuous and continuously differentiable on $(0,\\infty)$ with the following properties:</p>\n\n<ul>\n<li>$ F(x) = 1 + o(1/\\log{x})$ as $x \\downarrow 0 $,</li>\n<li>$F(x) = o(x^{-\\epsilon})$ as $x \\uparrow \\infty$ for some $\\epsilon&gt;0$,</li>\n<li>$\\int_0^{\\infty} F'(x) \\log{x} \\, dx = 0$</li>\n</ul>\n\n<p>Then\n$$ \\gamma = \\int_0^{\\infty} \\left( F(x) - e^{-x} \\right) \\frac{dx}{x}. $$\nThe proof is essentially identical to the above:\n$$ \\int_0^{\\infty} ( F(x) - e^{-x} ) \\frac{dx}{x} = [(F(x) - e^{-x}) \\log{x}]_0^{\\infty} - \\int_0^{\\infty} ( F'(x) + e^{-x} ) \\log{x} \\, dx = \\int_0^{\\infty} e^{-x} \\log{x} \\, dx, $$\nThe integral on the left exists by the first two conditions on $F$, which are also enough to ensure the boundary terms from the integration by parts go to zero.</p>\n" }, { "answer_id": 2087628, "score": 2, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present a supplement to the nice solution posted by @Chappers in order to have a self-contained way forward. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>In <a href=\"https://math.stackexchange.com/questions/1913325/real-analysis-methods-to-evaluate-int-0-infty-fracxa1x2-dx-a1/1913576#1913576\">THIS ANSWER</a>, I showed using the integral representation of the Gamma function</p>\n\n<p>$$\\Gamma(x)=\\int_0^\\infty s^{x-1}e^{-s}\\,ds \\tag 1$$</p>\n\n<p>that Gamma can be expressed as the limit</p>\n\n<p>$$\\Gamma(x)=\\lim_{n\\to \\infty}\\frac{n^x\\,n!}{x(x+1)(x+2)\\cdots (x+n)} \\tag 2$$</p>\n\n<p>Now, note that $(2)$ can be rewritten as </p>\n\n<p>$$\\begin{align}\n\\Gamma(x)&amp;=\\lim_{n\\to \\infty}\\frac{e^{x(\\log(n)-1-1/2-\\cdots -1/n)}\\, e^{x}e^{x/2}\\cdots e^{x/n}}{x(1+x)(1+x/2)\\cdots (1+x/n)} \\\\\\\\&amp;=\\frac{e^{-\\gamma x}}{x}\\prod_{n=1}^\\infty e^{x/n}\\left(1+\\frac xn\\right)^{-1}\\tag 3\n\\end{align}$$</p>\n\n<p>where $(3)$ gives the well-known Weierstrass product for Gamma. </p>\n\n<p>Differentiating the logarithm of $(3)$ and setting $x=1$ reveals</p>\n\n<p>$$\\Gamma'(1)=\\Gamma(1)\\left(-\\gamma -1+\\sum_{n=1}^\\infty \\left(\\frac1n-\\frac{1}{n+1}\\right)\\right)=-\\gamma \\tag 4$$</p>\n\n<p>Differentiating $(1)$ and setting $x=1$ yields</p>\n\n<p>$$\\Gamma'(1)=\\int_0^\\infty \\log(x)e^{-x}\\,dx \\tag 5$$</p>\n\n<p>whence comparing $(4)$ and $(5)$ we obtain the coveted result</p>\n\n<p>$$\\int_0^\\infty \\log(x)e^{-x}\\,dx =-\\gamma$$</p>\n\n<hr>\n\n<p>In <a href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763\">THIS ANSWER</a>, I showed that $\\gamma$ as given by $\\gamma=-\\int_0^\\infty e^{-x}\\,\\log(x)\\,dx$ is equal to $\\gamma$ as expressed by the limit $\\gamma=\\lim_{n\\to \\infty}\\left(-\\log(n)+\\sum_{k=1}^n\\frac1k\\right)$.</p>\n" }, { "answer_id": 2156662, "score": 2, "text": "<p>The same idea as in <a 
href=\"https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2141879#2141879\">my previous answer</a> applies. Notice that</p>\n\n<p>$$ f(x) = \\frac{1}{1+x} \\quad \\Rightarrow \\quad \\int_{\\epsilon}^{\\infty} \\frac{f(x)}{x} \\, dx = -\\log\\epsilon + \\log(1+\\epsilon) = -\\log\\epsilon + o(1) $$</p>\n\n<p>as $\\epsilon \\to 0^+$. Now from the linked answer above, we find that</p>\n\n<p>\\begin{align*}\nf(x) = e^{-x}\n&amp;\\quad \\Rightarrow \\quad c(f) = \\lim_{R\\to\\infty} \\left( \\int_{0}^{R} \\frac{ds}{1+s} - \\log R \\right) - \\gamma = -\\gamma \\\\\n&amp;\\quad \\Rightarrow \\quad \\int_{\\epsilon}^{\\infty} \\frac{e^{-x}}{x} \\, dx = -\\log\\epsilon - \\gamma + o(1).\n\\end{align*}</p>\n\n<p>It is worth remarking that the $\\gamma$ term above is computed from the identity $\\gamma = -\\int_{0}^{\\infty} e^{-x}\\log x \\, dx$, which you are already aware of. From this,</p>\n\n<p>\\begin{align*}\n\\int_{\\epsilon}^{\\infty} \\frac{1}{x^a + 1} \\, \\frac{dx}{x} - \\int_{\\epsilon}^{\\infty} e^{-x} \\, \\frac{dx}{x}\n&amp;= \\frac{1}{a}\\int_{\\epsilon^a}^{\\infty} \\frac{1}{x + 1} \\, \\frac{dx}{x} - \\int_{\\epsilon}^{\\infty} e^{-x} \\, \\frac{dx}{x}\\\\\n&amp;= \\gamma + o(1)\n\\end{align*}</p>\n\n<p>and taking $\\epsilon \\to 0^+$ gives the result.</p>\n\n<hr>\n\n<p>Using the quantity $c(f)$, you can compute various integrals (including all the integrals you have asked) together with some tabulated results for $c(f)$:</p>\n\n<p>\\begin{align*}\nc\\left\\{\\frac{1}{(1+x)^\\alpha}\\right\\} &amp;= -H_{\\alpha-1}, &amp;\nc\\{e^{-x}\\} &amp;= -\\gamma, \\\\\nc\\left\\{\\frac{x}{e^x-1}\\right\\} &amp;= 0, &amp;\nc\\{\\cos x\\} &amp;= -\\gamma,\n\\end{align*}</p>\n\n<p>where $H_n$ denotes the $n$-th harmonic number. 
For instance, if $a &gt; 0$ then</p>\n\n<p>\\begin{align*}\n\\int_{0}^{\\infty} \\left( \\frac{1}{\\sqrt{1+a x}} - e^{-x^2} \\right) \\frac{dx}{x}\n&amp;= c\\left\\{ \\frac{1}{\\sqrt{1+a x}} - e^{-x^2} \\right\\} \\\\\n&amp;= c\\left\\{ \\frac{1}{\\sqrt{1+a x}} \\right\\} - c\\{e^{-x^2}\\} \\\\\n&amp;= c\\left\\{ \\frac{1}{\\sqrt{1+x}} \\right\\} - \\log a - \\frac{1}{2}c\\{e^{-x}\\} \\\\\n&amp;= -H_{-1/2} - \\log a + \\frac{\\gamma}{2} \\\\\n&amp;= \\frac{\\gamma}{2} + \\log\\frac{4}{a}.\n\\end{align*}</p>\n" }, { "answer_id": 2178456, "score": 1, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[8px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n&amp;\\left.\\int_{0}^{\\infty}\\pars{{1 \\over 1 + x^{a}} - {1 \\over \\expo{x}}}\n\\,{\\dd x \\over x}\\,\\right\\vert_{\\ a\\ &gt;\\ 0} =\n\\lim_{\\epsilon \\to 0^{+}}\\pars{%\n\\int_{\\epsilon}^{\\infty}{\\dd x \\over \\pars{1 + x^{a}}x} -\n\\int_{\\epsilon}^{\\infty}{\\expo{-x} \\over x}\\,\\dd x}\n\\\\[5mm] = &amp;\\\n\\lim_{\\epsilon \\to 0^{+}}\\braces{%\n{1 \\over a}\\int_{\\epsilon^{a}}^{\\infty}{\\dd x \\over \\pars{1 + x}x} - \\bracks{-\\ln\\pars{\\epsilon}\\expo{-\\epsilon} +\n\\int_{\\epsilon}^{\\infty}\\ln\\pars{x}\\expo{-x}\\,\\dd x}}\n\\\\[5mm] = &amp;\\\n\\gamma +\n\\lim_{\\epsilon \\to 
0^{+}}\\bracks{%\n-{1 \\over a}\\ln\\pars{\\epsilon^{a} \\over 1 + \\epsilon^{a}} +\n\\ln\\pars{\\epsilon}\\expo{-\\epsilon}}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>because\n $\\ds{\\int_{0}^{\\infty}\\ln\\pars{x}\\expo{-x}\\,\\dd x =\n\\left.\\totald{}{\\mu}\\int_{0}^{\\infty}x^{\\mu}\\expo{-x}\\,\\dd x\n\\,\\right\\vert_{\\ \\mu\\ =\\ 0} =\n\\left.\\totald{\\Gamma\\pars{\\mu + 1}}{\\mu}\n\\,\\right\\vert_{\\ \\mu\\ =\\ 0} =\\Psi\\pars{1} = -\\gamma}$.</p>\n</blockquote>\n\n<p>Expression \\eqref{1} becomes:\n\\begin{align}\n&amp;\\left.\\int_{0}^{\\infty}\\pars{{1 \\over 1 + x^{a}} - {1 \\over \\expo{x}}}\n\\,{\\dd x \\over x}\\,\\right\\vert_{\\ a\\ &gt;\\ 0} =\n\\gamma\\ +\\\n\\underbrace{\\lim_{\\epsilon \\to 0^{+}}\\bracks{%\n{1 \\over a}\\ln\\pars{1 + \\epsilon^{a}} -\n\\ln\\pars{\\epsilon}\\pars{1 - \\expo{-\\epsilon}}}}_{\\ds{=\\ 0}}\\ =\\\n\\bbox[15px,#ffe,border:1px dotted navy]{\\ds{\\gamma}}\n\\end{align}</p>\n" } ]
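A common ingredient of the answers above is the identity $\int_0^\infty e^{-x}\log x\,dx=\Gamma'(1)=\Psi(1)=-\gamma$. Here is a stdlib-only spot check of both sides of that identity; the finite-difference step and quadrature cutoffs below are ad hoc choices of mine:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, for comparison

# (a) Gamma'(1) = psi(1) = -gamma, via a central difference of log-Gamma:
h = 1e-5
psi1 = (math.lgamma(1 + h) - math.lgamma(1 - h)) / (2 * h)
assert abs(psi1 + GAMMA) < 1e-8

# (b) integral of e^(-x) log(x) over (0, inf) equals -gamma,
#     via the substitution x = exp(t) and the trapezoidal rule:
lo, hi, step = -30.0, 6.5, 0.005
n = int(round((hi - lo) / step))
total = 0.0
for i in range(n + 1):
    t = lo + i * step
    x = math.exp(t)
    w = 0.5 if i in (0, n) else 1.0   # trapezoidal endpoint weights
    total += w * math.exp(-x) * t * x  # log(x) = t after the substitution
integral = step * total
assert abs(integral + GAMMA) < 1e-6
```

Both checks agree with $-\gamma$ far inside the asserted tolerances, which gives some confidence in the repeated use of $\Psi(1)=-\gamma$ above.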
Question: Alternative approaches to showing that $\gamma=\int_0^\infty \left(\frac{1}{1+x^a}-\frac{1}{e^x}\right)\,\frac1x\,dx$, $a&gt;0$ Answer 1: <p>Integrating by parts, $$ \int_0^{\infty} \left( \frac{1}{1+x^a} - e^{-x} \right) \frac{dx}{x} = 0-0 + \int_0^{\infty} \left( \frac{ax^a}{x(1+x^a)^2} - e^{-x} \right) \log{x} \, dx $$</p> <p>Of course, we recognise the second term as a familiar definition of/easy-to-derive formula for $\gamma$. The first term we need to show is zero. But $$ \int \frac{ax^a\log{x}}{x(1+x^a)^2} \, dx = \frac{x^a\log{x}}{1+x^a} - \frac{1}{a}\log{(1+x^a)}, $$ which is continuous and tends to zero at both endpoints since $a&gt;0$.</p> <hr> <p>The really interesting thing about this result in my opinion is that it shows the first term is a complete red herring: let $F$ be continuous and continuously differentiable on $(0,\infty)$ with the following properties:</p> <ul> <li>$ F(x) = 1 + o(1/\log{x})$ as $x \downarrow 0 $,</li> <li>$F(x) = o(x^{-\epsilon})$ as $x \uparrow \infty$ for some $\epsilon&gt;0$,</li> <li>$\int_0^{\infty} F'(x) \log{x} \, dx = 0$</li> </ul> <p>Then $$ \gamma = \int_0^{\infty} \left( F(x) - e^{-x} \right) \frac{dx}{x}. $$ The proof is essentially identical to the above: $$ \int_0^{\infty} ( F(x) - e^{-x} ) \frac{dx}{x} = [(F(x) - e^{-x}) \log{x}]_0^{\infty} - \int_0^{\infty} ( F'(x) + e^{-x} ) \log{x} \, dx = \int_0^{\infty} e^{-x} \log{x} \, dx, $$ The integral on the left exists by the first two conditions on $F$, which are also enough to ensure the boundary terms from the integration by parts go to zero.</p> Answer 2: <blockquote> <p><strong>I thought it might be instructive to present a supplement to the nice solution posted by @Chappers in order to have a self-contained way forward. 
To that end, we proceed.</strong></p> </blockquote> <hr> <p>In <a href="https://math.stackexchange.com/questions/1913325/real-analysis-methods-to-evaluate-int-0-infty-fracxa1x2-dx-a1/1913576#1913576">THIS ANSWER</a>, I showed using the integral representation of the Gamma function</p> <p>$$\Gamma(x)=\int_0^\infty s^{x-1}e^{-s}\,ds \tag 1$$</p> <p>that Gamma can be expressed as the limit</p> <p>$$\Gamma(x)=\lim_{n\to \infty}\frac{n^x\,n!}{x(x+1)(x+2)\cdots (x+n)} \tag 2$$</p> <p>Now, note that $(2)$ can be rewritten as </p> <p>$$\begin{align} \Gamma(x)&amp;=\lim_{n\to \infty}\frac{e^{x(\log(n)-1-1/2-\cdots -1/n)}\, e^{x}e^{x/2}\cdots e^{x/n}}{x(1+x)(1+x/2)\cdots (1+x/n)} \\\\&amp;=\frac{e^{-\gamma x}}{x}\prod_{n=1}^\infty e^{x/n}\left(1+\frac xn\right)^{-1}\tag 3 \end{align}$$</p> <p>where $(3)$ gives the well-known Weierstrass product for Gamma. </p> <p>Differentiating the logarithm of $(3)$ and setting $x=1$ reveals</p> <p>$$\Gamma'(1)=\Gamma(1)\left(-\gamma -1+\sum_{n=1}^\infty \left(\frac1n-\frac{1}{n+1}\right)\right)=-\gamma \tag 4$$</p> <p>Differentiating $(1)$ and setting $x=1$ yields</p> <p>$$\Gamma'(1)=\int_0^\infty \log(x)e^{-x}\,dx \tag 5$$</p> <p>whence comparing $(4)$ and $(5)$ we obtain the coveted result</p> <p>$$\int_0^\infty \log(x)e^{-x}\,dx =-\gamma$$</p> <hr> <p>In <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2101763#2101763">THIS ANSWER</a>, I showed that $\gamma$ as given by $\gamma=-\int_0^\infty e^{-x}\,\log(x)\,dx$ is equal to $\gamma$ as expressed by the limit $\gamma=\lim_{n\to \infty}\left(-\log(n)+\sum_{k=1}^n\frac1k\right)$.</p> Answer 3: <p>The same idea as in <a href="https://math.stackexchange.com/questions/2100567/real-analysis-methodologies-to-show-gamma-2-int-0-infty-frac-cosx2-cos/2141879#2141879">my previous answer</a> applies. 
Notice that</p> <p>$$ f(x) = \frac{1}{1+x} \quad \Rightarrow \quad \int_{\epsilon}^{\infty} \frac{f(x)}{x} \, dx = -\log\epsilon + \log(1+\epsilon) = -\log\epsilon + o(1) $$</p> <p>as $\epsilon \to 0^+$. Now from the linked answer above, we find that</p> <p>\begin{align*} f(x) = e^{-x} &amp;\quad \Rightarrow \quad c(f) = \lim_{R\to\infty} \left( \int_{0}^{R} \frac{ds}{1+s} - \log R \right) - \gamma = -\gamma \\ &amp;\quad \Rightarrow \quad \int_{\epsilon}^{\infty} \frac{e^{-x}}{x} \, dx = -\log\epsilon - \gamma + o(1). \end{align*}</p> <p>It is worth remarking that the $\gamma$ term above is computed from the identity $\gamma = -\int_{0}^{\infty} e^{-x}\log x \, dx$, which you are already aware of. From this,</p> <p>\begin{align*} \int_{\epsilon}^{\infty} \frac{1}{x^a + 1} \, \frac{dx}{x} - \int_{\epsilon}^{\infty} e^{-x} \, \frac{dx}{x} &amp;= \frac{1}{a}\int_{\epsilon^a}^{\infty} \frac{1}{x + 1} \, \frac{dx}{x} - \int_{\epsilon}^{\infty} e^{-x} \, \frac{dx}{x}\\ &amp;= \gamma + o(1) \end{align*}</p> <p>and taking $\epsilon \to 0^+$ gives the result.</p> <hr> <p>Using the quantity $c(f)$ together with the tabulated values of $c(f)$ below, you can compute various integrals (including all the integrals you have asked about):</p> <p>\begin{align*} c\left\{\frac{1}{(1+x)^\alpha}\right\} &amp;= -H_{\alpha-1}, &amp; c\{e^{-x}\} &amp;= -\gamma, \\ c\left\{\frac{x}{e^x-1}\right\} &amp;= 0, &amp; c\{\cos x\} &amp;= -\gamma, \end{align*}</p> <p>where $H_n$ denotes the $n$-th harmonic number. For instance, if $a &gt; 0$ then</p> <p>\begin{align*} \int_{0}^{\infty} \left( \frac{1}{\sqrt{1+a x}} - e^{-x^2} \right) \frac{dx}{x} &amp;= c\left\{ \frac{1}{\sqrt{1+a x}} - e^{-x^2} \right\} \\ &amp;= c\left\{ \frac{1}{\sqrt{1+a x}} \right\} - c\{e^{-x^2}\} \\ &amp;= c\left\{ \frac{1}{\sqrt{1+x}} \right\} - \log a - \frac{1}{2}c\{e^{-x}\} \\ &amp;= -H_{-1/2} - \log a + \frac{\gamma}{2} \\ &amp;= \frac{\gamma}{2} - \log\frac{a}{4}. 
\end{align*}</p> Answer 4: <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &amp;\left.\int_{0}^{\infty}\pars{{1 \over 1 + x^{a}} - {1 \over \expo{x}}} \,{\dd x \over x}\,\right\vert_{\ a\ &gt;\ 0} = \lim_{\epsilon \to 0^{+}}\pars{% \int_{\epsilon}^{\infty}{\dd x \over \pars{1 + x^{a}}x} - \int_{\epsilon}^{\infty}{\expo{-x} \over x}\,\dd x} \\[5mm] = &amp;\ \lim_{\epsilon \to 0^{+}}\braces{% {1 \over a}\int_{\epsilon^{a}}^{\infty}{\dd x \over \pars{1 + x}x} - \bracks{-\ln\pars{\epsilon}\expo{-\epsilon} + \int_{\epsilon}^{\infty}\ln\pars{x}\expo{-x}\,\dd x}} \\[5mm] = &amp;\ \gamma + \lim_{\epsilon \to 0^{+}}\bracks{% -{1 \over a}\ln\pars{\epsilon^{a} \over 1 + \epsilon^{a}} + \ln\pars{\epsilon}\expo{-\epsilon}}\label{1}\tag{1} \end{align}</p> <blockquote> <p>because $\ds{\int_{0}^{\infty}\ln\pars{x}\expo{-x}\,\dd x = \left.\totald{}{\mu}\int_{0}^{\infty}x^{\mu}\expo{-x}\,\dd x \,\right\vert_{\ \mu\ =\ 0} = \left.\totald{\Gamma\pars{\mu + 1}}{\mu} \,\right\vert_{\ \mu\ =\ 0} =\Psi\pars{1} = -\gamma}$.</p> </blockquote> <p>Expression \eqref{1} becomes: \begin{align} &amp;\left.\int_{0}^{\infty}\pars{{1 \over 1 + x^{a}} - {1 \over \expo{x}}} \,{\dd x \over x}\,\right\vert_{\ a\ &gt;\ 0} = \gamma\ +\ \underbrace{\lim_{\epsilon \to 0^{+}}\bracks{% {1 \over a}\ln\pars{1 + \epsilon^{a}} - \ln\pars{\epsilon}\pars{1 - 
\expo{-\epsilon}}}}_{\ds{=\ 0}}\ =\ \bbox[15px,#ffe,border:1px dotted navy]{\ds{\gamma}} \end{align}</p>
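All of the derivations above arrive at the same closed form, and it is easy to sanity-check numerically. A minimal sketch (Python with numpy/scipy assumed available; the split at $x = 1$ just keeps `quad` comfortable near the origin and on the infinite tail):

```python
import numpy as np
from scipy.integrate import quad

def gamma_integral(a):
    """Numerically evaluate int_0^inf (1/(1+x^a) - e^{-x}) dx/x, split at x = 1."""
    f = lambda x: (1.0 / (1.0 + x**a) - np.exp(-x)) / x
    head, _ = quad(f, 0.0, 1.0)      # integrand is bounded as x -> 0+ for these a
    tail, _ = quad(f, 1.0, np.inf)   # decaying tail
    return head + tail

# The value should be Euler's constant gamma for every a > 0.
results = {a: gamma_integral(a) for a in (1.0, 2.0, 3.5)}
for a, val in results.items():
    assert abs(val - np.euler_gamma) < 1e-6, (a, val)
```

The independence of the value from $a$ is exactly the "red herring" point made in the first answer.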
Methods for constructing symplectic forms
2,087,055
4
https://math.stackexchange.com/questions/2087055/methods-for-constructing-symplectic-forms
<p>Suppose I am given a $2n$ dimensional manifold $M$ and I want to put a symplectic form on it. How can I determine whether or not it admits one, and if I do know that it admits one, what are my options for (explicitly) constructing some?</p> <p>My current knowledge of techniques is the following:</p> <p>(1) If we are in $\mathbb R^{2n}$ or $\mathbb C^n$ then we can take global coordinates and construct the <a href="https://en.wikipedia.org/wiki/Symplectic_vector_space#Standard_symplectic_space" rel="nofollow noreferrer">standard ones</a>.</p> <p>(2) If we are on a cotangent bundle we can construct the canonical one using the <a href="https://en.wikipedia.org/wiki/Tautological_one-form#Coordinate-free_definition" rel="nofollow noreferrer">tautological 1-form</a>.</p> <p>(3) If the <a href="https://en.wikipedia.org/wiki/Holonomy#Riemannian_holonomy" rel="nofollow noreferrer">Riemannian holonomy</a> group (assuming we put some metric on $M$ -- although I'm confused about why the metric we choose matters...) is contained in the symplectic group then we can transfer symplectic forms on $\mathbb R^{2n}$ to $M$. My understanding here though is that we can say the form looks a certain way in certain coordinates at a point on the manifold but not really what it looks like locally.</p> <p>(4) Brute force: define locally in coordinates and then show the result is well-defined.</p> <p>(5) And I am aware of <a href="https://mathoverflow.net/questions/68715/symplectic-form-with-partition-on-unity">this cohomology existence result</a>.</p> <p>So are there any other techniques? In particular, ways like method $(2)$, where one can construct the form globally but still know what it looks like locally.</p>
[ { "answer_id": 2087759, "score": 5, "text": "<p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href=\"http://www.ams.org/notices/200508/what-is.pdf\" rel=\"nofollow noreferrer\">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admiting a Lefschetz pencil implies the existence of a compatible symplectic form. </p>\n\n<p>As for obstructions, the most basic is $H^2_{dr}\\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. <a href=\"https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures\">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p>\n\n<p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\\#Y$ does not admit a symplectic structure. </p>\n" }, { "answer_id": 2087741, "score": 3, "text": "<p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\\mathfrak g$, let $\\mathfrak g^*$ be the dual vector space to $\\mathfrak g$. This carries a natural representation of $G$, defined by $(g\\cdot\\lambda)(X):=\\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. 
Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p>\n\n<p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\\mathbb R^{2n}$. Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p>\n\n<p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. This implies that its so-called fundamental two-form is a symplectic form. Hoever, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p>\n" }, { "answer_id": 2087759, "score": 5, "text": "<p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href=\"http://www.ams.org/notices/200508/what-is.pdf\" rel=\"nofollow noreferrer\">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admiting a Lefschetz pencil implies the existence of a compatible symplectic form. 
</p>\n\n<p>As for obstructions, the most basic is $H^2_{dr}\\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. <a href=\"https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures\">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p>\n\n<p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\\#Y$ does not admit a symplectic structure. </p>\n" }, { "answer_id": 2087741, "score": 3, "text": "<p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\\mathfrak g$, let $\\mathfrak g^*$ be the dual vector space to $\\mathfrak g$. This carries a natural representation of $G$, defined by $(g\\cdot\\lambda)(X):=\\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p>\n\n<p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\\mathbb R^{2n}$. 
Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p>\n\n<p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. This implies that its so-called fundamental two-form is a symplectic form. Hoever, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p>\n" }, { "answer_id": 2087759, "score": 5, "text": "<p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href=\"http://www.ams.org/notices/200508/what-is.pdf\" rel=\"nofollow noreferrer\">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admiting a Lefschetz pencil implies the existence of a compatible symplectic form. </p>\n\n<p>As for obstructions, the most basic is $H^2_{dr}\\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. 
<a href=\"https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures\">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p>\n\n<p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\\#Y$ does not admit a symplectic structure. </p>\n" }, { "answer_id": 2087741, "score": 3, "text": "<p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\\mathfrak g$, let $\\mathfrak g^*$ be the dual vector space to $\\mathfrak g$. This carries a natural representation of $G$, defined by $(g\\cdot\\lambda)(X):=\\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p>\n\n<p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\\mathbb R^{2n}$. Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p>\n\n<p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. 
This implies that its so-called fundamental two-form is a symplectic form. Hoever, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p>\n" } ]
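For reference, the Kirillov-Kostant-Souriau form named in the coadjoint-orbit answer admits a one-line local description. The following sketch uses standard conventions; the pairing $\langle\cdot,\cdot\rangle$ and the sign choice are assumptions on my part, not part of the original answer:

```latex
% Tangent vectors to the coadjoint orbit through \mu \in \mathfrak g^* are
% the infinitesimal generators \mathrm{ad}^*_X\,\mu with X \in \mathfrak g.
% The KKS form evaluates on two such generators as (up to a sign convention)
\omega_\mu\bigl(\mathrm{ad}^*_X\,\mu,\ \mathrm{ad}^*_Y\,\mu\bigr)
   = \langle \mu, [X, Y] \rangle .
% Well-definedness: if \mathrm{ad}^*_X\,\mu = 0, then
% \langle \mu, [X, Y] \rangle = \mp(\mathrm{ad}^*_X\,\mu)(Y) = 0 for all Y,
% so the value does not depend on the chosen generators.  Nondegeneracy
% holds because the kernel is exactly the stabilizer direction, and
% d\omega = 0 follows from the Jacobi identity.
```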
Question: Methods for constructing symplectic forms Answer 1: <p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href="http://www.ams.org/notices/200508/what-is.pdf" rel="nofollow noreferrer">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admitting a Lefschetz pencil implies the existence of a compatible symplectic form. </p> <p>As for obstructions, the most basic is $H^2_{dr}\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. <a href="https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due to a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p> <p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegaard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\#Y$ does not admit a symplectic structure. </p> Answer 2: <p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\mathfrak g$, let $\mathfrak g^*$ be the dual vector space to $\mathfrak g$. This carries a natural representation of $G$, defined by $(g\cdot\lambda)(X):=\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. 
Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p> <p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\mathbb R^{2n}$. Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p> <p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. This implies that its so-called fundamental two-form is a symplectic form. However, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p> Answer 3: <p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href="http://www.ams.org/notices/200508/what-is.pdf" rel="nofollow noreferrer">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admitting a Lefschetz pencil implies the existence of a compatible symplectic form. 
</p> <p>As for obstructions, the most basic is $H^2_{dr}\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. <a href="https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due to a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p> <p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegaard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\#Y$ does not admit a symplectic structure. </p> Answer 4: <p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\mathfrak g$, let $\mathfrak g^*$ be the dual vector space to $\mathfrak g$. This carries a natural representation of $G$, defined by $(g\cdot\lambda)(X):=\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p> <p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\mathbb R^{2n}$. 
Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p> <p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. This implies that its so-called fundamental two-form is a symplectic form. However, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p> Answer 5: <p>In real dimension 4, there is a complete topological characterization of which smooth closed 4-manifolds admit symplectic forms. By a theorem of Donaldson all symplectic 4-manifolds admit the structure of a <a href="http://www.ams.org/notices/200508/what-is.pdf" rel="nofollow noreferrer">Lefschetz pencil</a> compatible with the symplectic form, and by a theorem of Gompf admitting a Lefschetz pencil implies the existence of a compatible symplectic form. </p> <p>As for obstructions, the most basic is $H^2_{dr}\ne 0$. There is a canonical way to place an almost complex structure on the tangent bundle of any symplectic manifold, and therefore there are algebraic relations in terms of the characteristic classes of the tangent bundle that need to be solved. For instance in dimension 4 the Wu relations need to be satisfied (c.f. 
<a href="https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures">https://mathoverflow.net/questions/63439/how-can-we-detect-the-existence-of-almost-complex-structures</a>). For open manifolds this turns out to be the only obstruction due to a theorem of Gromov (any open almost-complex manifold admits a symplectic structure).</p> <p>In dimension 4 there are finer invariants related to the gauge-theoretic invariants and Heegaard Floer homology. A famous result of Taubes in this direction is that if $X,Y$ are closed 4-manifolds with $b_2^+&gt;0$ then $X\#Y$ does not admit a symplectic structure. </p> Answer 6: <p>In your list of constructions, you are certainly missing coadjoint orbits: If $G$ is any Lie group with Lie algebra $\mathfrak g$, let $\mathfrak g^*$ be the dual vector space to $\mathfrak g$. This carries a natural representation of $G$, defined by $(g\cdot\lambda)(X):=\lambda(Ad(g^{-1})(X))$ with $Ad$ denoting the usual adjoint action. Any orbit of $G$ in this representation is a smooth submanifold (since it is a homogeneous space of $G$) which inherits a canonical symplectic structure, usually called the Kirillov-Kostant-Souriau form.</p> <p>However, I think that overall your question is rather misleading: By Darboux's theorem, all symplectic forms look the same locally, i.e. locally around any point, you can find coordinates in which a given symplectic form looks like the standard form on $\mathbb R^{2n}$. Global existence of symplectic forms is a very difficult question, and apart from the obvious topological obstructions that occur in your question (and which basically only concern compact manifolds) non-existence results depend on subtle invariants.</p> <p>By the way, the condition in (3) is a typical example for why I consider your question as misleading. The condition on holonomy says that the given metric $g$ is a Kähler metric. This implies that its so-called fundamental two-form is a symplectic form. 
However, Kähler metrics are much more restrictive objects than symplectic forms, and even if a manifold admits one Kähler metric, then there is an infinite dimensional space of metrics on $M$ which are not Kähler. So this is a method which only works if you have guessed the right result from the beginning. This does not mean that the condition is not interesting, but you get the existence of a symplectic form from existence of a much stronger and much more rigid structure. </p>
How to integrate $\tan(x) \tan(2x) \tan(3x)$?
2,087,119
4
https://math.stackexchange.com/questions/2087119/how-to-integrate-tanx-tan2x-tan3x
<p>We have to integrate the following:</p> <p>$$\int \tan x \tan 2x \tan 3x \,\mathrm dx$$</p> <p>Here is what I tried:</p> <p>First I split everything into sine and cosine terms.</p> <p>Then I used $\sin 2x =2\sin x \cos x$.</p> <p>But after that I got stuck.</p>
[ { "answer_id": 2087129, "score": 10, "text": "<p>Since\n$$\n\\tan3x=\\tan(x+2x)=\\dfrac{\\tan x+\\tan2x}{1-\\tan x\\tan2x},\n$$\nwe have\n$$\n\\tan3x-\\tan x\\tan2x\\tan3x=\\tan x+\\tan2x,\n$$\ni.e.\n$$\n\\tan x\\tan 2x\\tan3x=\\tan 3x-\\tan x-\\tan 2x.\n$$\nFor $a\\ne 0$ we have\n$$\n\\int\\tan(ax)\\,dx=-\\dfrac{1}{a}\\ln|\\cos(ax)|+c, \n$$\nand therefore\n\\begin{eqnarray}\n\\int \\tan x\\tan 2x\\tan3x\\,dx&amp;=&amp;\\int(\\tan 3x-\\tan x-\\tan 2x)\\,dx\\\\\n&amp;=&amp;\\ln|\\cos x|+\\dfrac{1}{2}\\ln|\\cos2x|-\\dfrac13\\ln|\\cos3x|+c\n\\end{eqnarray}</p>\n" }, { "answer_id": 2087125, "score": 7, "text": "<p>(Useful) Trick time!</p>\n\n<p>Note that $\\tan 3x = \\frac{(\\tan 2x + \\tan x)}{(1-\\tan x \\tan 2x)}$. Cross multiply, and you get that $\\tan 3x \\tan 2x \\tan x = -\\tan x - \\tan 2x + \\tan 3x$. Now, the integrals of $\\tan x$ is $\\log \\sec x$, so you can finish it from here.</p>\n" }, { "answer_id": 2087127, "score": 7, "text": "<p>Notice\n$$\\tan3x=\\frac{\\tan2x+\\tan x}{1-\\tan2x\\tan x}\\Rightarrow \\tan3x\\tan2x\\tan x=\\tan3x-\\tan2x-\\tan x$$\nthen\n$$\\int \\tan3x\\tan2x\\tan x \\mathrm{d}x = \\int \\tan3x - \\tan2x - \\tan x \\ \\mathrm{d}x$$\nthe answer will follow.</p>\n" }, { "answer_id": 2087131, "score": 7, "text": "<p>In addition to the other answers, we can also do like this:<br>\n<br>\nUse $\\tan 3x =\\frac {3\\cos^2 x \\sin x-\\sin ^3 x}{\\cos^3 x-3\\cos x\\sin^2 x}$ ,$\\tan 2x=\\frac {2\\cos x\\sin x}{\\cos^2 x-\\sin^2 x}$ and substitute $u=\\cos x $ to get $$I =\\int \\tan x\\tan 2x\\tan 3x dx =\\int -\\frac {2(4\\cos^4 x -5\\cos^2 x+1)}{\\cos x (2\\cos^2 x-1)(4\\cos^2 x-3)} \\sin x dx =2 \\int \\frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\\frac {8}{3} \\int \\frac{u}{4u^2-3} du +\\int \\frac {2u}{2u^2-1} du +\\int \\frac {2}{3u} du $$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2087129, "score": 10, "text": "<p>Since\n$$\n\\tan3x=\\tan(x+2x)=\\dfrac{\\tan x+\\tan2x}{1-\\tan x\\tan2x},\n$$\nwe have\n$$\n\\tan3x-\\tan x\\tan2x\\tan3x=\\tan x+\\tan2x,\n$$\ni.e.\n$$\n\\tan x\\tan 2x\\tan3x=\\tan 3x-\\tan x-\\tan 2x.\n$$\nFor $a\\ne 0$ we have\n$$\n\\int\\tan(ax)\\,dx=-\\dfrac{1}{a}\\ln|\\cos(ax)|+c, \n$$\nand therefore\n\\begin{eqnarray}\n\\int \\tan x\\tan 2x\\tan3x\\,dx&amp;=&amp;\\int(\\tan 3x-\\tan x-\\tan 2x)\\,dx\\\\\n&amp;=&amp;\\ln|\\cos x|+\\dfrac{1}{2}\\ln|\\cos2x|-\\dfrac13\\ln|\\cos3x|+c\n\\end{eqnarray}</p>\n" }, { "answer_id": 2087125, "score": 7, "text": "<p>(Useful) Trick time!</p>\n\n<p>Note that $\\tan 3x = \\frac{(\\tan 2x + \\tan x)}{(1-\\tan x \\tan 2x)}$. Cross multiply, and you get that $\\tan 3x \\tan 2x \\tan x = -\\tan x - \\tan 2x + \\tan 3x$. Now, the integrals of $\\tan x$ is $\\log \\sec x$, so you can finish it from here.</p>\n" }, { "answer_id": 2087127, "score": 7, "text": "<p>Notice\n$$\\tan3x=\\frac{\\tan2x+\\tan x}{1-\\tan2x\\tan x}\\Rightarrow \\tan3x\\tan2x\\tan x=\\tan3x-\\tan2x-\\tan x$$\nthen\n$$\\int \\tan3x\\tan2x\\tan x \\mathrm{d}x = \\int \\tan3x - \\tan2x - \\tan x \\ \\mathrm{d}x$$\nthe answer will follow.</p>\n" }, { "answer_id": 2087131, "score": 7, "text": "<p>In addition to the other answers, we can also do like this:<br>\n<br>\nUse $\\tan 3x =\\frac {3\\cos^2 x \\sin x-\\sin ^3 x}{\\cos^3 x-3\\cos x\\sin^2 x}$ ,$\\tan 2x=\\frac {2\\cos x\\sin x}{\\cos^2 x-\\sin^2 x}$ and substitute $u=\\cos x $ to get $$I =\\int \\tan x\\tan 2x\\tan 3x dx =\\int -\\frac {2(4\\cos^4 x -5\\cos^2 x+1)}{\\cos x (2\\cos^2 x-1)(4\\cos^2 x-3)} \\sin x dx =2 \\int \\frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\\frac {8}{3} \\int \\frac{u}{4u^2-3} du +\\int \\frac {2u}{2u^2-1} du +\\int \\frac {2}{3u} du $$ Hope you can take it from here. 
</p>\n" }, { "answer_id": 2087129, "score": 10, "text": "<p>Since\n$$\n\\tan3x=\\tan(x+2x)=\\dfrac{\\tan x+\\tan2x}{1-\\tan x\\tan2x},\n$$\nwe have\n$$\n\\tan3x-\\tan x\\tan2x\\tan3x=\\tan x+\\tan2x,\n$$\ni.e.\n$$\n\\tan x\\tan 2x\\tan3x=\\tan 3x-\\tan x-\\tan 2x.\n$$\nFor $a\\ne 0$ we have\n$$\n\\int\\tan(ax)\\,dx=-\\dfrac{1}{a}\\ln|\\cos(ax)|+c, \n$$\nand therefore\n\\begin{eqnarray}\n\\int \\tan x\\tan 2x\\tan3x\\,dx&amp;=&amp;\\int(\\tan 3x-\\tan x-\\tan 2x)\\,dx\\\\\n&amp;=&amp;\\ln|\\cos x|+\\dfrac{1}{2}\\ln|\\cos2x|-\\dfrac13\\ln|\\cos3x|+c\n\\end{eqnarray}</p>\n" }, { "answer_id": 2087125, "score": 7, "text": "<p>(Useful) Trick time!</p>\n\n<p>Note that $\\tan 3x = \\frac{(\\tan 2x + \\tan x)}{(1-\\tan x \\tan 2x)}$. Cross multiply, and you get that $\\tan 3x \\tan 2x \\tan x = -\\tan x - \\tan 2x + \\tan 3x$. Now, the integrals of $\\tan x$ is $\\log \\sec x$, so you can finish it from here.</p>\n" }, { "answer_id": 2087127, "score": 7, "text": "<p>Notice\n$$\\tan3x=\\frac{\\tan2x+\\tan x}{1-\\tan2x\\tan x}\\Rightarrow \\tan3x\\tan2x\\tan x=\\tan3x-\\tan2x-\\tan x$$\nthen\n$$\\int \\tan3x\\tan2x\\tan x \\mathrm{d}x = \\int \\tan3x - \\tan2x - \\tan x \\ \\mathrm{d}x$$\nthe answer will follow.</p>\n" }, { "answer_id": 2087131, "score": 7, "text": "<p>In addition to the other answers, we can also do like this:<br>\n<br>\nUse $\\tan 3x =\\frac {3\\cos^2 x \\sin x-\\sin ^3 x}{\\cos^3 x-3\\cos x\\sin^2 x}$ ,$\\tan 2x=\\frac {2\\cos x\\sin x}{\\cos^2 x-\\sin^2 x}$ and substitute $u=\\cos x $ to get $$I =\\int \\tan x\\tan 2x\\tan 3x dx =\\int -\\frac {2(4\\cos^4 x -5\\cos^2 x+1)}{\\cos x (2\\cos^2 x-1)(4\\cos^2 x-3)} \\sin x dx =2 \\int \\frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\\frac {8}{3} \\int \\frac{u}{4u^2-3} du +\\int \\frac {2u}{2u^2-1} du +\\int \\frac {2}{3u} du $$ Hope you can take it from here. </p>\n" } ]
Question: How to integrate $\tan(x) \tan(2x) \tan(3x)$? Answer 1: <p>Since $$ \tan3x=\tan(x+2x)=\dfrac{\tan x+\tan2x}{1-\tan x\tan2x}, $$ we have $$ \tan3x-\tan x\tan2x\tan3x=\tan x+\tan2x, $$ i.e. $$ \tan x\tan 2x\tan3x=\tan 3x-\tan x-\tan 2x. $$ For $a\ne 0$ we have $$ \int\tan(ax)\,dx=-\dfrac{1}{a}\ln|\cos(ax)|+c, $$ and therefore \begin{eqnarray} \int \tan x\tan 2x\tan3x\,dx&amp;=&amp;\int(\tan 3x-\tan x-\tan 2x)\,dx\\ &amp;=&amp;\ln|\cos x|+\dfrac{1}{2}\ln|\cos2x|-\dfrac13\ln|\cos3x|+c \end{eqnarray}</p> Answer 2: <p>(Useful) Trick time!</p> <p>Note that $\tan 3x = \frac{(\tan 2x + \tan x)}{(1-\tan x \tan 2x)}$. Cross multiply, and you get that $\tan 3x \tan 2x \tan x = -\tan x - \tan 2x + \tan 3x$. Now, the integral of $\tan x$ is $\log \sec x$, so you can finish it from here.</p> Answer 3: <p>Notice $$\tan3x=\frac{\tan2x+\tan x}{1-\tan2x\tan x}\Rightarrow \tan3x\tan2x\tan x=\tan3x-\tan2x-\tan x$$ then $$\int \tan3x\tan2x\tan x \mathrm{d}x = \int \tan3x - \tan2x - \tan x \ \mathrm{d}x$$ the answer will follow.</p> Answer 4: <p>In addition to the other answers, we can also do like this:<br> <br> Use $\tan 3x =\frac {3\cos^2 x \sin x-\sin ^3 x}{\cos^3 x-3\cos x\sin^2 x}$ ,$\tan 2x=\frac {2\cos x\sin x}{\cos^2 x-\sin^2 x}$ and substitute $u=\cos x $ to get $$I =\int \tan x\tan 2x\tan 3x dx =\int -\frac {2(4\cos^4 x -5\cos^2 x+1)}{\cos x (2\cos^2 x-1)(4\cos^2 x-3)} \sin x dx =2 \int \frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\frac {8}{3} \int \frac{u}{4u^2-3} du +\int \frac {2u}{2u^2-1} du +\int \frac {2}{3u} du $$ Hope you can take it from here. </p> Answer 5: <p>Since $$ \tan3x=\tan(x+2x)=\dfrac{\tan x+\tan2x}{1-\tan x\tan2x}, $$ we have $$ \tan3x-\tan x\tan2x\tan3x=\tan x+\tan2x, $$ i.e. $$ \tan x\tan 2x\tan3x=\tan 3x-\tan x-\tan 2x. 
$$ For $a\ne 0$ we have $$ \int\tan(ax)\,dx=-\dfrac{1}{a}\ln|\cos(ax)|+c, $$ and therefore \begin{eqnarray} \int \tan x\tan 2x\tan3x\,dx&amp;=&amp;\int(\tan 3x-\tan x-\tan 2x)\,dx\\ &amp;=&amp;\ln|\cos x|+\dfrac{1}{2}\ln|\cos2x|-\dfrac13\ln|\cos3x|+c \end{eqnarray}</p> Answer 6: <p>(Useful) Trick time!</p> <p>Note that $\tan 3x = \frac{(\tan 2x + \tan x)}{(1-\tan x \tan 2x)}$. Cross multiply, and you get that $\tan 3x \tan 2x \tan x = -\tan x - \tan 2x + \tan 3x$. Now, the integrals of $\tan x$ is $\log \sec x$, so you can finish it from here.</p> Answer 7: <p>Notice $$\tan3x=\frac{\tan2x+\tan x}{1-\tan2x\tan x}\Rightarrow \tan3x\tan2x\tan x=\tan3x-\tan2x-\tan x$$ then $$\int \tan3x\tan2x\tan x \mathrm{d}x = \int \tan3x - \tan2x - \tan x \ \mathrm{d}x$$ the answer will follow.</p> Answer 8: <p>In addition to the other answers, we can also do like this:<br> <br> Use $\tan 3x =\frac {3\cos^2 x \sin x-\sin ^3 x}{\cos^3 x-3\cos x\sin^2 x}$ ,$\tan 2x=\frac {2\cos x\sin x}{\cos^2 x-\sin^2 x}$ and substitute $u=\cos x $ to get $$I =\int \tan x\tan 2x\tan 3x dx =\int -\frac {2(4\cos^4 x -5\cos^2 x+1)}{\cos x (2\cos^2 x-1)(4\cos^2 x-3)} \sin x dx =2 \int \frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\frac {8}{3} \int \frac{u}{4u^2-3} du +\int \frac {2u}{2u^2-1} du +\int \frac {2}{3u} du $$ Hope you can take it from here. </p> Answer 9: <p>Since $$ \tan3x=\tan(x+2x)=\dfrac{\tan x+\tan2x}{1-\tan x\tan2x}, $$ we have $$ \tan3x-\tan x\tan2x\tan3x=\tan x+\tan2x, $$ i.e. $$ \tan x\tan 2x\tan3x=\tan 3x-\tan x-\tan 2x. $$ For $a\ne 0$ we have $$ \int\tan(ax)\,dx=-\dfrac{1}{a}\ln|\cos(ax)|+c, $$ and therefore \begin{eqnarray} \int \tan x\tan 2x\tan3x\,dx&amp;=&amp;\int(\tan 3x-\tan x-\tan 2x)\,dx\\ &amp;=&amp;\ln|\cos x|+\dfrac{1}{2}\ln|\cos2x|-\dfrac13\ln|\cos3x|+c \end{eqnarray}</p> Answer 10: <p>(Useful) Trick time!</p> <p>Note that $\tan 3x = \frac{(\tan 2x + \tan x)}{(1-\tan x \tan 2x)}$. Cross multiply, and you get that $\tan 3x \tan 2x \tan x = -\tan x - \tan 2x + \tan 3x$. 
Now, the integral of $\tan x$ is $\log \sec x$, so you can finish it from here.</p> Answer 11: <p>Notice $$\tan3x=\frac{\tan2x+\tan x}{1-\tan2x\tan x}\Rightarrow \tan3x\tan2x\tan x=\tan3x-\tan2x-\tan x$$ then $$\int \tan3x\tan2x\tan x \mathrm{d}x = \int \tan3x - \tan2x - \tan x \ \mathrm{d}x$$ the answer will follow.</p> Answer 12: <p>In addition to the other answers, we can also do like this:<br> <br> Use $\tan 3x =\frac {3\cos^2 x \sin x-\sin ^3 x}{\cos^3 x-3\cos x\sin^2 x}$ ,$\tan 2x=\frac {2\cos x\sin x}{\cos^2 x-\sin^2 x}$ and substitute $u=\cos x $ to get $$I =\int \tan x\tan 2x\tan 3x dx =\int -\frac {2(4\cos^4 x -5\cos^2 x+1)}{\cos x (2\cos^2 x-1)(4\cos^2 x-3)} \sin x dx =2 \int \frac {4u^4-5u^2+1}{u (2u^2-1)(4u^2-3)} du = -\frac {8}{3} \int \frac{u}{4u^2-3} du +\int \frac {2u}{2u^2-1} du +\int \frac {2}{3u} du $$ Hope you can take it from here. </p>
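The identity behind all of the answers above, $\tan x\tan 2x\tan 3x=\tan 3x-\tan 2x-\tan x$, and the resulting antiderivative can be spot-checked numerically. A minimal sketch (the sample points are arbitrary, chosen away from the poles of the tangents):

```python
import math

# Identity from the answers: tan(x)tan(2x)tan(3x) = tan(3x) - tan(2x) - tan(x)
def integrand(x):
    return math.tan(x) * math.tan(2 * x) * math.tan(3 * x)

def identity_rhs(x):
    return math.tan(3 * x) - math.tan(2 * x) - math.tan(x)

# Antiderivative from the top answer:
# F(x) = ln|cos x| + (1/2) ln|cos 2x| - (1/3) ln|cos 3x|
def F(x):
    return (math.log(abs(math.cos(x)))
            + 0.5 * math.log(abs(math.cos(2 * x)))
            - math.log(abs(math.cos(3 * x))) / 3.0)

h = 1e-6
for x in (0.1, 0.3, 0.45, 1.2):
    # the algebraic identity holds pointwise
    assert abs(integrand(x) - identity_rhs(x)) < 1e-8
    # F'(x) matches the integrand (central finite difference)
    assert abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x)) < 1e-4
```

This does not prove the identity, of course, but it is a quick way to catch sign errors in the decomposition before integrating.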
Hint in integration $\int\frac{x^{2}}{\left(x\cos x-\sin x \right )\left( x\sin x+\cos x \right )}\,\mathrm{d}x$
2,087,133
4
https://math.stackexchange.com/questions/2087133/hint-in-integration-int-fracx2-leftx-cos-x-sin-x-right-left-x-sin
<p>Consider the following integral:</p> <p>$$\int \frac{x^{2}}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x$$</p> <p>I tried a lot, but could not find a proper start.</p> <p>Can anybody provide me a hint?</p>
[ { "answer_id": 2087138, "score": 8, "text": "<p><strong>Hint</strong>. One may observe the following trick\n$$\n\\begin{align}\n\\frac{x^2}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}&amp;=\\frac{x\\cos x(x \\cos x-\\sin x)+x\\sin x(x\\sin x+\\cos x)}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}\n\\\\\\\\&amp;=\\frac{x\\cos x}{x\\sin x+\\cos x}+\\frac{x \\sin x}{x \\cos x-\\sin x}\n\\\\\\\\&amp;=\\frac{(x\\sin x+\\cos x)'}{(x\\sin x+\\cos x)}-\\frac{(x \\cos x-\\sin x)'}{(x \\cos x-\\sin x)}\n\\end{align}\n$$ then one may conclude.</p>\n" }, { "answer_id": 2087142, "score": 6, "text": "<p>Notice\n$$x^{2}=x^2\\left ( \\sin^2x+\\cos^2x \\right )$$\nthen\n\\begin{align*}\n\\int \\frac{x^{2}}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x&amp;=\\int \\frac{x^2\\left ( \\sin^2x+\\cos^2x \\right )}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x\\\\\n&amp;=\\int \\frac{x\\cos x}{x\\sin x+\\cos x}\\, \\mathrm{d}x+\\int \\frac{x\\sin x}{x\\cos x-\\sin x}\\, \\mathrm{d}x\\\\\n&amp;=\\ln\\left | x\\sin x+\\cos x \\right |-\\ln\\left | x\\cos x-\\sin x \\right |+C\n\\end{align*}</p>\n" }, { "answer_id": 2087138, "score": 8, "text": "<p><strong>Hint</strong>. 
One may observe the following trick\n$$\n\\begin{align}\n\\frac{x^2}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}&amp;=\\frac{x\\cos x(x \\cos x-\\sin x)+x\\sin x(x\\sin x+\\cos x)}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}\n\\\\\\\\&amp;=\\frac{x\\cos x}{x\\sin x+\\cos x}+\\frac{x \\sin x}{x \\cos x-\\sin x}\n\\\\\\\\&amp;=\\frac{(x\\sin x+\\cos x)'}{(x\\sin x+\\cos x)}-\\frac{(x \\cos x-\\sin x)'}{(x \\cos x-\\sin x)}\n\\end{align}\n$$ then one may conclude.</p>\n" }, { "answer_id": 2087142, "score": 6, "text": "<p>Notice\n$$x^{2}=x^2\\left ( \\sin^2x+\\cos^2x \\right )$$\nthen\n\\begin{align*}\n\\int \\frac{x^{2}}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x&amp;=\\int \\frac{x^2\\left ( \\sin^2x+\\cos^2x \\right )}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x\\\\\n&amp;=\\int \\frac{x\\cos x}{x\\sin x+\\cos x}\\, \\mathrm{d}x+\\int \\frac{x\\sin x}{x\\cos x-\\sin x}\\, \\mathrm{d}x\\\\\n&amp;=\\ln\\left | x\\sin x+\\cos x \\right |-\\ln\\left | x\\cos x-\\sin x \\right |+C\n\\end{align*}</p>\n" }, { "answer_id": 2087138, "score": 8, "text": "<p><strong>Hint</strong>. 
One may observe the following trick\n$$\n\\begin{align}\n\\frac{x^2}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}&amp;=\\frac{x\\cos x(x \\cos x-\\sin x)+x\\sin x(x\\sin x+\\cos x)}{(x \\cos x-\\sin x)(x\\sin x+\\cos x)}\n\\\\\\\\&amp;=\\frac{x\\cos x}{x\\sin x+\\cos x}+\\frac{x \\sin x}{x \\cos x-\\sin x}\n\\\\\\\\&amp;=\\frac{(x\\sin x+\\cos x)'}{(x\\sin x+\\cos x)}-\\frac{(x \\cos x-\\sin x)'}{(x \\cos x-\\sin x)}\n\\end{align}\n$$ then one may conclude.</p>\n" }, { "answer_id": 2087142, "score": 6, "text": "<p>Notice\n$$x^{2}=x^2\\left ( \\sin^2x+\\cos^2x \\right )$$\nthen\n\\begin{align*}\n\\int \\frac{x^{2}}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x&amp;=\\int \\frac{x^2\\left ( \\sin^2x+\\cos^2x \\right )}{\\left ( x\\cos x-\\sin x \\right )\\left ( x\\sin x+\\cos x \\right )}\\, \\mathrm{d}x\\\\\n&amp;=\\int \\frac{x\\cos x}{x\\sin x+\\cos x}\\, \\mathrm{d}x+\\int \\frac{x\\sin x}{x\\cos x-\\sin x}\\, \\mathrm{d}x\\\\\n&amp;=\\ln\\left | x\\sin x+\\cos x \\right |-\\ln\\left | x\\cos x-\\sin x \\right |+C\n\\end{align*}</p>\n" } ]
Question: Hint in integration $\int\frac{x^{2}}{\left(x\cos x-\sin x \right )\left( x\sin x+\cos x \right )}\,\mathrm{d}x$ Answer 1: <p><strong>Hint</strong>. One may observe the following trick $$ \begin{align} \frac{x^2}{(x \cos x-\sin x)(x\sin x+\cos x)}&amp;=\frac{x\cos x(x \cos x-\sin x)+x\sin x(x\sin x+\cos x)}{(x \cos x-\sin x)(x\sin x+\cos x)} \\\\&amp;=\frac{x\cos x}{x\sin x+\cos x}+\frac{x \sin x}{x \cos x-\sin x} \\\\&amp;=\frac{(x\sin x+\cos x)'}{(x\sin x+\cos x)}-\frac{(x \cos x-\sin x)'}{(x \cos x-\sin x)} \end{align} $$ then one may conclude.</p> Answer 2: <p>Notice $$x^{2}=x^2\left ( \sin^2x+\cos^2x \right )$$ then \begin{align*} \int \frac{x^{2}}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x&amp;=\int \frac{x^2\left ( \sin^2x+\cos^2x \right )}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x\\ &amp;=\int \frac{x\cos x}{x\sin x+\cos x}\, \mathrm{d}x+\int \frac{x\sin x}{x\cos x-\sin x}\, \mathrm{d}x\\ &amp;=\ln\left | x\sin x+\cos x \right |-\ln\left | x\cos x-\sin x \right |+C \end{align*}</p> Answer 3: <p><strong>Hint</strong>. 
One may observe the following trick $$ \begin{align} \frac{x^2}{(x \cos x-\sin x)(x\sin x+\cos x)}&amp;=\frac{x\cos x(x \cos x-\sin x)+x\sin x(x\sin x+\cos x)}{(x \cos x-\sin x)(x\sin x+\cos x)} \\\\&amp;=\frac{x\cos x}{x\sin x+\cos x}+\frac{x \sin x}{x \cos x-\sin x} \\\\&amp;=\frac{(x\sin x+\cos x)'}{(x\sin x+\cos x)}-\frac{(x \cos x-\sin x)'}{(x \cos x-\sin x)} \end{align} $$ then one may conclude.</p> Answer 4: <p>Notice $$x^{2}=x^2\left ( \sin^2x+\cos^2x \right )$$ then \begin{align*} \int \frac{x^{2}}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x&amp;=\int \frac{x^2\left ( \sin^2x+\cos^2x \right )}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x\\ &amp;=\int \frac{x\cos x}{x\sin x+\cos x}\, \mathrm{d}x+\int \frac{x\sin x}{x\cos x-\sin x}\, \mathrm{d}x\\ &amp;=\ln\left | x\sin x+\cos x \right |-\ln\left | x\cos x-\sin x \right |+C \end{align*}</p> Answer 5: <p><strong>Hint</strong>. One may observe the following trick $$ \begin{align} \frac{x^2}{(x \cos x-\sin x)(x\sin x+\cos x)}&amp;=\frac{x\cos x(x \cos x-\sin x)+x\sin x(x\sin x+\cos x)}{(x \cos x-\sin x)(x\sin x+\cos x)} \\\\&amp;=\frac{x\cos x}{x\sin x+\cos x}+\frac{x \sin x}{x \cos x-\sin x} \\\\&amp;=\frac{(x\sin x+\cos x)'}{(x\sin x+\cos x)}-\frac{(x \cos x-\sin x)'}{(x \cos x-\sin x)} \end{align} $$ then one may conclude.</p> Answer 6: <p>Notice $$x^{2}=x^2\left ( \sin^2x+\cos^2x \right )$$ then \begin{align*} \int \frac{x^{2}}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x&amp;=\int \frac{x^2\left ( \sin^2x+\cos^2x \right )}{\left ( x\cos x-\sin x \right )\left ( x\sin x+\cos x \right )}\, \mathrm{d}x\\ &amp;=\int \frac{x\cos x}{x\sin x+\cos x}\, \mathrm{d}x+\int \frac{x\sin x}{x\cos x-\sin x}\, \mathrm{d}x\\ &amp;=\ln\left | x\sin x+\cos x \right |-\ln\left | x\cos x-\sin x \right |+C \end{align*}</p>
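Both hints lead to the same answer, $\ln\left | x\sin x+\cos x \right |-\ln\left | x\cos x-\sin x \right |+C$, which can be sanity-checked by differentiating numerically. A small sketch (the sample points are arbitrary, chosen where neither factor of the denominator vanishes):

```python
import math

# Integrand x^2 / ((x cos x - sin x)(x sin x + cos x))
def integrand(x):
    return x * x / ((x * math.cos(x) - math.sin(x)) * (x * math.sin(x) + math.cos(x)))

# Antiderivative from the answers: ln|x sin x + cos x| - ln|x cos x - sin x|
def F(x):
    return (math.log(abs(x * math.sin(x) + math.cos(x)))
            - math.log(abs(x * math.cos(x) - math.sin(x))))

h = 1e-6
for x in (1.0, 2.0, 3.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(deriv - integrand(x)) < 1e-4
```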
To show that $6$ divides $n$ if $6$ $|$ $(p_1 ^2 + p_2 ^2 + p_3 ^2 ......... p_n ^2)$ where $p_i$ is a prime
2,087,379
4
https://math.stackexchange.com/questions/2087379/to-show-that-6-divides-n-if-6-p-1-2-p-2-2-p-3-2-p-n
<p>Let $n$ be a <strong>positive</strong> integer and let $p_1,p_2,p_3,\ldots,p_n$ be prime numbers, <strong>all</strong> of them greater than $5$. If $6$ divides $p_1 ^2 + p_2 ^2 + p_3 ^2+\ldots +p_n ^2$, prove that $6$ divides $n$. <br><br><strong>NOTE:-</strong> This problem is the 2nd question of the 1998 RMO (Regional Mathematical Olympiad).<br> I tried using congruences with $2$ and $3$ on the first condition, but it did not work.</p>
[ { "answer_id": 2087383, "score": 7, "text": "<p>Notice that every prime$&gt;3$ is of the form $6n\\pm1$, so the square of every such prime will be $36n^2\\pm12n+1$, so when you are adding all those squares, you will get something like: </p>\n\n<p>$36(n_1^2+n_2^2+n_3^2......)+12(\\pm n_1+\\pm n_2.....)+ n(1)$..</p>\n\n<p>The two terms containing $36$ and $12$ are divisible by $6$, the divisibility of the whole by $6$ depends upon divisibility of $n$ by $6$ and vice versa.</p>\n" }, { "answer_id": 2087397, "score": 6, "text": "<p>The sum of squares of $n$ odd numbers is $\\equiv n\\pmod 8$ because a single odd square is $\\equiv 1\\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\\equiv n\\pmod 3$. Hence under the given conditions, we even have $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {24}$, but of course in particular $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {6}$.</p>\n" }, { "answer_id": 2087502, "score": 3, "text": "<p>Using congruences on $2$ and $3$ works as follows:</p>\n\n<p>For all primes $p_i\\ge 5$, we know that $2 \\nmid p_i$ and $3\\nmid p_i$. 
Therefore $p_i^2\\equiv 1 \\bmod 2$ and $p_i^2\\equiv 1 \\bmod 3$, which immediately gives us that $p_i^2 \\equiv 1 \\bmod 6$ by the Chinese Remainder Theorem.</p>\n\n<p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\\equiv 0\\bmod 6 $ (that is, $6\\mid s$) if $6\\mid n$ also.</p>\n" }, { "answer_id": 2087383, "score": 7, "text": "<p>Notice that every prime$&gt;3$ is of the form $6n\\pm1$, so the square of every such prime will be $36n^2\\pm12n+1$, so when you are adding all those squares, you will get something like: </p>\n\n<p>$36(n_1^2+n_2^2+n_3^2......)+12(\\pm n_1+\\pm n_2.....)+ n(1)$..</p>\n\n<p>The two terms containing $36$ and $12$ are divisible by $6$, the divisibility of the whole by $6$ depends upon divisibility of $n$ by $6$ and vice versa.</p>\n" }, { "answer_id": 2087397, "score": 6, "text": "<p>The sum of squares of $n$ odd numbers is $\\equiv n\\pmod 8$ because a single odd square is $\\equiv 1\\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\\equiv n\\pmod 3$. Hence under the given conditions, we even have $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {24}$, but of course in particular $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {6}$.</p>\n" }, { "answer_id": 2087502, "score": 3, "text": "<p>Using congruences on $2$ and $3$ works as follows:</p>\n\n<p>For all primes $p_i\\ge 5$, we know that $2 \\nmid p_i$ and $3\\nmid p_i$. 
Therefore $p_i^2\\equiv 1 \\bmod 2$ and $p_i^2\\equiv 1 \\bmod 3$, which immediately gives us that $p_i^2 \\equiv 1 \\bmod 6$ by the Chinese Remainder Theorem.</p>\n\n<p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\\equiv 0\\bmod 6 $ (that is, $6\\mid s$) if $6\\mid n$ also.</p>\n" }, { "answer_id": 2087383, "score": 7, "text": "<p>Notice that every prime$&gt;3$ is of the form $6n\\pm1$, so the square of every such prime will be $36n^2\\pm12n+1$, so when you are adding all those squares, you will get something like: </p>\n\n<p>$36(n_1^2+n_2^2+n_3^2......)+12(\\pm n_1+\\pm n_2.....)+ n(1)$..</p>\n\n<p>The two terms containing $36$ and $12$ are divisible by $6$, the divisibility of the whole by $6$ depends upon divisibility of $n$ by $6$ and vice versa.</p>\n" }, { "answer_id": 2087397, "score": 6, "text": "<p>The sum of squares of $n$ odd numbers is $\\equiv n\\pmod 8$ because a single odd square is $\\equiv 1\\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\\equiv n\\pmod 3$. Hence under the given conditions, we even have $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {24}$, but of course in particular $p_1^2+\\ldots +p_n^2\\equiv n\\pmod {6}$.</p>\n" }, { "answer_id": 2087502, "score": 3, "text": "<p>Using congruences on $2$ and $3$ works as follows:</p>\n\n<p>For all primes $p_i\\ge 5$, we know that $2 \\nmid p_i$ and $3\\nmid p_i$. Therefore $p_i^2\\equiv 1 \\bmod 2$ and $p_i^2\\equiv 1 \\bmod 3$, which immediately gives us that $p_i^2 \\equiv 1 \\bmod 6$ by the Chinese Remainder Theorem.</p>\n\n<p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\\equiv 0\\bmod 6 $ (that is, $6\\mid s$) if $6\\mid n$ also.</p>\n" } ]
Question: To show that $6$ divides $n$ if $6$ $|$ $(p_1 ^2 + p_2 ^2 + p_3 ^2 ......... p_n ^2)$ where $p_i$ is a prime Answer 1: <p>Notice that every prime$&gt;3$ is of the form $6n\pm1$, so the square of every such prime will be $36n^2\pm12n+1$, so when you are adding all those squares, you will get something like: </p> <p>$36(n_1^2+n_2^2+n_3^2......)+12(\pm n_1+\pm n_2.....)+ n(1)$..</p> <p>The two terms containing $36$ and $12$ are divisible by $6$, the divisibility of the whole by $6$ depends upon divisibility of $n$ by $6$ and vice versa.</p> Answer 2: <p>The sum of squares of $n$ odd numbers is $\equiv n\pmod 8$ because a single odd square is $\equiv 1\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\equiv n\pmod 3$. Hence under the given conditions, we even have $p_1^2+\ldots +p_n^2\equiv n\pmod {24}$, but of course in particular $p_1^2+\ldots +p_n^2\equiv n\pmod {6}$.</p> Answer 3: <p>Using congruences on $2$ and $3$ works as follows:</p> <p>For all primes $p_i\ge 5$, we know that $2 \nmid p_i$ and $3\nmid p_i$. Therefore $p_i^2\equiv 1 \bmod 2$ and $p_i^2\equiv 1 \bmod 3$, which immediately gives us that $p_i^2 \equiv 1 \bmod 6$ by the Chinese Remainder Theorem.</p> <p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\equiv 0\bmod 6 $ (that is, $6\mid s$) if $6\mid n$ also.</p> Answer 4: <p>Notice that every prime$&gt;3$ is of the form $6n\pm1$, so the square of every such prime will be $36n^2\pm12n+1$, so when you are adding all those squares, you will get something like: </p> <p>$36(n_1^2+n_2^2+n_3^2......)+12(\pm n_1+\pm n_2.....)+ n(1)$..</p> <p>The two terms containing $36$ and $12$ are divisible by $6$, the divisibility of the whole by $6$ depends upon divisibility of $n$ by $6$ and vice versa.</p> Answer 5: <p>The sum of squares of $n$ odd numbers is $\equiv n\pmod 8$ because a single odd square is $\equiv 1\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\equiv n\pmod 3$. 
Hence under the given conditions, we even have $p_1^2+\ldots +p_n^2\equiv n\pmod {24}$, but of course in particular $p_1^2+\ldots +p_n^2\equiv n\pmod {6}$.</p> Answer 6: <p>Using congruences on $2$ and $3$ works as follows:</p> <p>For all primes $p_i\ge 5$, we know that $2 \nmid p_i$ and $3\nmid p_i$. Therefore $p_i^2\equiv 1 \bmod 2$ and $p_i^2\equiv 1 \bmod 3$, which immediately gives us that $p_i^2 \equiv 1 \bmod 6$ by the Chinese Remainder Theorem.</p> <p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\equiv 0\bmod 6 $ (that is, $s\mid 6$) if $n\mid 6 $ also.</p> Answer 7: <p>Notice that every prime$&gt;3$ is of the form $6n\pm1$, So square of every such prime will be $36n^2\pm12n+1$, so when you are adding all those squares, you will get something like: </p> <p>$36(n_1^2+n_2^2+n_3^2......)+12(\pm n_1+\pm n_2.....)+ n(1)$..</p> <p>The two terms containing $6$ and $12$ are divisible by $6$, the divisibility of whole by $6$ depends upon divisibilty of $n$ by $6$ and vice versa.</p> Answer 8: <p>The sum of squares of $n$ odd numbers is $\equiv n\pmod 8$ because a single odd square is $\equiv 1\pmod 8$; similarly the sum of squares of $n$ numbers not divisible by $3$ is $\equiv n\pmod 3$. Hence under the given conditions, we even have $p_1^2+\ldots +p_n^2\equiv n\pmod {24}$, but of course in particular $p_1^2+\ldots +p_n^2\equiv n\pmod {6}$.</p> Answer 9: <p>Using congruences on $2$ and $3$ works as follows:</p> <p>For all primes $p_i\ge 5$, we know that $2 \nmid p_i$ and $3\nmid p_i$. Therefore $p_i^2\equiv 1 \bmod 2$ and $p_i^2\equiv 1 \bmod 3$, which immediately gives us that $p_i^2 \equiv 1 \bmod 6$ by the Chinese Remainder Theorem.</p> <p>Clearly adding $n$ such prime squares to a total $s$ will only give us $s\equiv 0\bmod 6 $ (that is, $s\mid 6$) if $n\mid 6 $ also.</p>
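The congruence arguments in these answers are easy to sanity-check numerically. A minimal sketch (mine, not part of the original answers; the helper names are made up) verifying that $p^2 \equiv 1 \pmod{24}$ for every prime $p > 3$, and hence that a sum of $n$ such squares is $\equiv n \pmod 6$:

```python
import random

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# Every prime p > 3 is coprime to 24, and then p^2 = 1 (mod 24).
primes = [p for p in primes_up_to(1000) if p > 3]
assert all(p * p % 24 == 1 for p in primes)

# Consequently a sum of n such squares is congruent to n mod 6,
# so 6 divides the sum exactly when 6 divides n.
for _ in range(100):
    sample = random.choices(primes, k=random.randint(1, 50))
    s = sum(p * p for p in sample)
    assert s % 6 == len(sample) % 6
    assert (s % 6 == 0) == (len(sample) % 6 == 0)
```

Note that the hypothesis $p_i > 3$ matters: the check would fail for samples containing $2$ or $3$, since $4 \equiv 4$ and $9 \equiv 3 \pmod 6$.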
How to construct $A$ and $b$, given the set of all solutions to $Ax=b$?
2,087,439
4
https://math.stackexchange.com/questions/2087439/how-to-construct-a-and-b-given-the-set-of-all-solutions-to-ax-b
<blockquote> <p>I want to find a matrix $A \in \mathbb{C}^{2\times 4}$ and $b \in \mathbb{C}^{2}$ such that the solution set of $Ax=b$ is: $$L = \left\{\pmatrix{1\\2\\0\\-1} + x_1\pmatrix{1\\-2\\2\\1} + x_2\pmatrix{2\\2\\-1\\1}\right\}$$</p> </blockquote> <p>Therefore the domain of $A$ has dimension $4$, $\dim(\ker(A)) = 2$, and $\dim(\operatorname{im}(A)) = 2$.</p> <p>$A$ and $b$ have the following format:</p> <p>$$ A= \begin{pmatrix} a_{11} &amp; a_{12} &amp; a_{13} &amp; a_{14}\\ a_{21} &amp; a_{22} &amp; a_{23} &amp; a_{24} \\ \end{pmatrix} b= \pmatrix{b_1\\b_2} $$</p> <p>My idea is to solve the following equation for $A$ and $b$:</p> <p>$$A\,\pmatrix{1 + x_1 + 2x_2\\2 - 2x_1+ 2x_2\\2x_1 - x_2\\-1+x_1 + x_2} = b$$ Is this the right way to start? I then get a linear system with 12 unknowns and only 2 equations. I know that there must be many solutions. Do I simply set some of them to 1 or 0?</p>
[ { "answer_id": 2087468, "score": 5, "text": "<p>If you do this you don't get a linear system with twelve unknowns and two equations. Firstly, computing the product of $A$ with a matrix with some entries unknown is not linear, because it will involve terms of the form $a_{ij}x_k$; and secondly, $x_1$ and $x_2$ are <em>not</em> unknowns, the equation has to hold for all values of $x_1$ and $x_2$.</p>\n\n<p>You know that in general the set of solutions of a system $Ax=b$ is given (if non-empty) by $v_0+\\operatorname{null}A$ for some solution $v_0$ (any one will do, but you need to know that there is at least one). Comparing this with the definition of $L$, you get that $A$ must be such that\n$$\\operatorname{null} A=\\operatorname{span} ((1,-2,2,1)^T , (2,2,-1,1)^T).$$\nSo you want the rows $A_1,A_2$ of $A$ to be linearly independent (so that $A$ has rank $2$) and such that\n$$A_i\\cdot (1,-2,2,1)^T=0=A_i\\cdot (2,2,-1,1)^T;$$\nTaking the transpose of these equations shows that the rows of $A$ must form a basis of the space of solutions to the homogeneous linear system\n$$\\left\\{\\begin{array}{lll}a_1-2a_2+2a_3+a_4 &amp; = &amp; 0 \\\\ 2a_1+2a_2-a_3+a_4 &amp; = &amp; 0 \\end{array}\\right.$$</p>\n\n<p>Once you've found the matrix $A$, you need to make sure that $(1,2,0,-1)^T$ is a solution of the system $Ax=b$, so you just have to take\n$$b=A\\cdot \\pmatrix{1\\\\2\\\\0\\\\-1}.$$</p>\n" }, { "answer_id": 2087523, "score": 1, "text": "<p>Using\n$$\nv_0=\\pmatrix{1\\\\2\\\\0\\\\-1} \\quad\nv_1=\\pmatrix{1\\\\-2\\\\2\\\\1} \\quad \nv_2=\\pmatrix{2\\\\2\\\\-1\\\\1}\n$$\nIf you want\n$$\nb = A(v_0 + x_1 v_1 + x_2 v_2) \\iff \\\\\n\\underbrace{b - A v_0}_d = x_1 Av_1 + x_2 Av_2 \n= \\underbrace{(Av_1 Av_2)}_C \\underbrace{(x_1, x_2)^T}_u \\iff \\\\\nC u = d \\quad\n$$\nto hold for all $u = (x_1, x_2)^T \\in \\mathbb{R}^2$ it must include $0$, so $C0 = 0 = d$ and thus $b = A v_0$ for whatever $A$ we end up with. 
</p>\n\n<p>Further $C u = 0$ for all $u$ means $C e_i = c_i = 0$ thus $C=(c_1 \\, c_2) = (0 \\, 0) = 0$.</p>\n\n<p>This requires $A v_1 = A v_2 = 0$. </p>\n\n<p>With $A^T = (r_1 \\, r_2)$ we need $r_i^T v_j = r_i \\cdot v_j = 0 \\iff r_i \\perp v_j \\iff v_j^T r_i = 0$. </p>\n\n<p>So $\\ker A = \\langle v_1, v_2 \\rangle \\perp \n\\langle r_1, r_2 \\rangle = \\DeclareMathOperator{img}{img}\\img A^T$\n$$\nV^T r = 0 \\iff \n\\begin{bmatrix}\nv_1^T \\\\\nv_2^T\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 &amp; -2 &amp; 2 &amp; 1 \\\\\n2 &amp; 2 &amp; -1 &amp; 1\n\\end{bmatrix}\n\\to\n\\begin{bmatrix}\n1 &amp; -2 &amp; 2 &amp; 1 \\\\\n0 &amp; 6 &amp; -5 &amp; -1\n\\end{bmatrix}\n\\to \\\\\n\\begin{bmatrix}\n1 &amp; -2 &amp; 2 &amp; 1 \\\\\n0 &amp; 1 &amp; -5/6 &amp; -1/6\n\\end{bmatrix}\n\\to\n\\begin{bmatrix}\n1 &amp; 0 &amp; 1/3 &amp; 2/3 \\\\\n0 &amp; 1 &amp; -5/6 &amp; -1/6\n\\end{bmatrix}\n$$\nWe find $r_1 = (-1, 1, 1, 1)^T$, $r_2 = (0, 3, 4, -2)^T$ do the job.</p>\n\n<p>So\n$$\nA =\n\\begin{pmatrix}\n-1 &amp; 1 &amp; 1 &amp; 1 \\\\\n 0 &amp; 3 &amp; 4 &amp; -2\n\\end{pmatrix}\n\\quad\nb =\n\\begin{pmatrix}\n0 \\\\\n8\n\\end{pmatrix}\n$$\nis a solution.</p>\n" } ]
Question: How to construct $A$ and $b$, given the set of all solutions to $Ax=b$? Answer 1: <p>If you do this you don't get a linear system with twelve unknowns and two equations. Firstly, computing the product of $A$ with a matrix with some entries unknown is not linear, because it will involve terms of the form $a_{ij}x_k$; and secondly, $x_1$ and $x_2$ are <em>not</em> unknowns, the equation has to hold for all values of $x_1$ and $x_2$.</p> <p>You know that in general the set of solutions of a system $Ax=b$ is given (if non-empty) by $v_0+\operatorname{null}A$ for some solution $v_0$ (any one will do, but you need to know that there is at least one). Comparing this with the definition of $L$, you get that $A$ must be such that $$\operatorname{null} A=\operatorname{span} ((1,-2,2,1)^T , (2,2,-1,1)^T).$$ So you want the rows $A_1,A_2$ of $A$ to be linearly independent (so that $A$ has rank $2$) and such that $$A_i\cdot (1,-2,2,1)^T=0=A_i\cdot (2,2,-1,1)^T;$$ Taking the transpose of these equations shows that the rows of $A$ must form a basis of the space of solutions to the homogeneous linear system $$\left\{\begin{array}{lll}a_1-2a_2+2a_3+a_4 &amp; = &amp; 0 \\ 2a_1+2a_2-a_3+a_4 &amp; = &amp; 0 \end{array}\right.$$</p> <p>Once you've found the matrix $A$, you need to make sure that $(1,2,0,-1)^T$ is a solution of the system $Ax=b$, so you just have to take $$b=A\cdot \pmatrix{1\\2\\0\\-1}.$$</p> Answer 2: <p>Using $$ v_0=\pmatrix{1\\2\\0\\-1} \quad v_1=\pmatrix{1\\-2\\2\\1} \quad v_2=\pmatrix{2\\2\\-1\\1} $$ If you want $$ b = A(v_0 + x_1 v_1 + x_2 v_2) \iff \\ \underbrace{b - A v_0}_d = x_1 Av_1 + x_2 Av_2 = \underbrace{(Av_1 Av_2)}_C \underbrace{(x_1, x_2)^T}_u \iff \\ C u = d \quad $$ to hold for all $u = (x_1, x_2)^T \in \mathbb{R}^2$ it must include $0$, so $C0 = 0 = d$ and thus $b = A v_0$ for whatever $A$ we end up with. 
</p> <p>Further $C u = 0$ for all $u$ means $C e_i = c_i = 0$ thus $C=(c_1 \, c_2) = (0 \, 0) = 0$.</p> <p>This requires $A v_1 = A v_2 = 0$. </p> <p>With $A^T = (r_1 \, r_2)$ we need $r_i^T v_j = r_i \cdot v_j = 0 \iff r_i \perp v_j \iff v_j^T r_i = 0$. </p> <p>So $\ker A = \langle v_1, v_2 \rangle \perp \langle r_1, r_2 \rangle = \DeclareMathOperator{img}{img}\img A^T$ $$ V^T r = 0 \iff \begin{bmatrix} v_1^T \\ v_2^T \end{bmatrix} = \begin{bmatrix} 1 &amp; -2 &amp; 2 &amp; 1 \\ 2 &amp; 2 &amp; -1 &amp; 1 \end{bmatrix} \to \begin{bmatrix} 1 &amp; -2 &amp; 2 &amp; 1 \\ 0 &amp; 6 &amp; -5 &amp; -1 \end{bmatrix} \to \\ \begin{bmatrix} 1 &amp; -2 &amp; 2 &amp; 1 \\ 0 &amp; 1 &amp; -5/6 &amp; -1/6 \end{bmatrix} \to \begin{bmatrix} 1 &amp; 0 &amp; 1/3 &amp; 2/3 \\ 0 &amp; 1 &amp; -5/6 &amp; -1/6 \end{bmatrix} $$ We find $r_1 = (-1, 1, 1, 1)^T$, $r_2 = (0, 3, 4, -2)^T$ do the job.</p> <p>So $$ A = \begin{pmatrix} -1 &amp; 1 &amp; 1 &amp; 1 \\ 0 &amp; 3 &amp; 4 &amp; -2 \end{pmatrix} \quad b = \begin{pmatrix} 0 \\ 8 \end{pmatrix} $$ is a solution.</p>
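The construction in these answers can be checked mechanically. A small pure-Python sketch (mine, not part of the answers) confirming that the $A$ and $b$ found above reproduce the solution set $L$: the particular solution $v_0$ satisfies $Av_0 = b$ and the two direction vectors span $\ker A$.

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[-1, 1, 1, 1],
     [ 0, 3, 4, -2]]
b = [0, 8]

v0 = [1, 2, 0, -1]   # particular solution
v1 = [1, -2, 2, 1]   # kernel basis vector
v2 = [2, 2, -1, 1]   # kernel basis vector

assert matvec(A, v0) == b          # v0 solves A x = b
assert matvec(A, v1) == [0, 0]     # v1 lies in the null space
assert matvec(A, v2) == [0, 0]     # v2 lies in the null space

# Hence every point of L solves the system, e.g. with x1 = 3, x2 = -5:
x = [a + 3 * c1 - 5 * c2 for a, c1, c2 in zip(v0, v1, v2)]
assert matvec(A, x) == b
```

Since the rows of $A$ are linearly independent, $\operatorname{rank} A = 2$ and $\dim \ker A = 4 - 2 = 2$, so $v_1, v_2$ do not just lie in the kernel, they span it.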
What is the purpose of this axiom in the definition of a uniform space?
2,087,765
4
https://math.stackexchange.com/questions/2087765/what-is-the-purpose-of-this-axiom-in-the-definition-of-a-uniform-space
<p>I started reading about uniform spaces the other day, and in an attempt to familiarize myself with the notion I attempted to work through some examples based on the entourage definition of a uniformity. I was able to understand and verify the uniformity on a metric space in which $\{\{(x,\ y)\ |\ d(x,\ y)&lt;a\}\ |\ a\in\mathbb{R}_{&gt;0}\}$ is a fundamental system of entourages.</p> <p>However, something still doesn't sit well with me, and that's the 4th axiom wikipedia gives:</p> <blockquote> <p>If $U\in\Phi$, then there is some $V\in\Phi$ for which $(x,\ y)\in V$ and $(y,\ z)\in V$ implies $(x,\ z)\in U$.</p> </blockquote> <p>I recognize that this implies that for all $U\in\Phi$, there is $V\in\Phi$ for which $V\circ V\subseteq U$ where $\circ$ is the binary relation composition operator. My question is: <strong>why?</strong></p> <p>Why is it necessary to include this in the definition of a uniform space? As far as I've read, it appears to me uniform spaces were invented to generalize the notion of uniform continuity between spaces: a map between uniform spaces is uniformly continuous if the image of an entourage in the domain space is an entourage in the codomain space. 
This also allows us to generalize the notion of Cauchy sequences to Cauchy nets.</p> <p>I understand that a net $(x_\alpha)$ is a Cauchy net if for every entourage $U$ there is $\lambda$ such that whenever $\alpha,\ \beta &gt; \lambda$, $(x_\alpha,\ x_\beta)\in U$.</p> <p>Now, I can understand how the notion of a Cauchy net generalizes the notion of a Cauchy sequence (under the condition that an entourage is taken to be a binary relation of "closeness"), but I haven't worked through the details of why the above definition of a uniformly continuous map preserves Cauchy nets (though it doesn't sound too far off from the usual proof that uniformly continuous functions on $\mathbb{R}$ preserve Cauchy sequences).</p> <p>I read in <a href="https://math.stackexchange.com/questions/105977/definition-of-uniform-structure">this answer</a> that the 4th axiom is essentially inspired by the triangle inequality, but that doesn't quite make sense to me either. I suppose it could be that axiom 4 is meant to generalize the notion of a metric space, in which uniform continuity can be defined (and in fact, the notion of a metric is required in the usual definition of uniform continuity), but I still don't quite see how omitting the 4th axiom would break the definition of uniform continuity.</p> <p>So, what part of all of this, the generalization of Cauchy sequences and uniform continuity, is the 4th axiom required for? Why would omitting the 4th axiom be a problem, and why does the 4th axiom "fix" the problem, and properly generalize uniform continuity?</p>
[ { "answer_id": 2087779, "score": 6, "text": "<p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p>\n<blockquote>\n<p>Let <span class=\"math-container\">$x$</span> (with certain properties) be given. The enemy chooses an <span class=\"math-container\">$\\varepsilon&gt;0$</span> and we must then find a <span class=\"math-container\">$z$</span> (with certain other properties) within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>We can do that by first finding an <span class=\"math-container\">$y$</span> (with yet other properties) within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$x$</span> and <em>then</em> finding a <span class=\"math-container\">$z$</span> with the desired property within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$y$</span>. Then, due to the triangle inequality, that <span class=\"math-container\">$z$</span> will be within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>The task is now reduced to proving separately that (1) there is an <span class=\"math-container\">$y$</span> near every <span class=\"math-container\">$x$</span>, and (2) there is a <span class=\"math-container\">$z$</span> near every <span class=\"math-container\">$y$</span>.</p>\n</blockquote>\n<p>In a uniform space, the enemy choosing a <span class=\"math-container\">$\\varepsilon$</span> corresponds to choosing an <span class=\"math-container\">$U$</span>, and splitting the <span class=\"math-container\">$\\varepsilon$</span> into two <span class=\"math-container\">$\\frac\\varepsilon2$</span>s corresponds to picking a <span class=\"math-container\">$V$</span> by the axiom.</p>\n<p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting 
<span class=\"math-container\">$\\varepsilon$</span> into seven <span class=\"math-container\">$\\frac\\varepsilon7$</span>s. This would be modeled in a uniform space by several successive uses of the axiom.</p>\n<p>Sometimes it is convenient that the pieces you split the <span class=\"math-container\">$\\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class=\"math-container\">$V$</span> such that <span class=\"math-container\">$V\\circ V\\subseteq U$</span>, and not just <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$W$</span> such that <span class=\"math-container\">$V\\circ W\\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). But the difference is not really important; the <span class=\"math-container\">$V\\circ W$</span> version implies the <span class=\"math-container\">$V\\circ V$</span> version because entourages are preserved by intersection.</p>\n" }, { "answer_id": 3091791, "score": 1, "text": "<p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, makes sense if we view an entourage <span class=\"math-container\">$U$</span> as a mapping taking any point <span class=\"math-container\">$x$</span> to a neighborhood <span class=\"math-container\">$U[x]$</span> (with the different neighborhoods having the same 'size').</p>\n\n<p>In particular, this allows us to find a entourage <span class=\"math-container\">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class=\"math-container\">$V \\circ V$</span> is twice the 'radius' of <span class=\"math-container\">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p>\n\n<p>Why does this matter? 
Well, consider a case where this isn't true: if <span class=\"math-container\">$(X, d)$</span> is a metric space, we could let \n <span class=\"math-container\">$$ U_{r}[x] = \\{ y \\in X : 1 &lt; d(x,y) &lt; r \\} \\cup \\{x\\} $$</span>\nfor each <span class=\"math-container\">$r &gt; 1$</span>. Then I believe the collection of <span class=\"math-container\">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class=\"math-container\">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p>\n\n<p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p>\n" }, { "answer_id": 2087779, "score": 6, "text": "<p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p>\n<blockquote>\n<p>Let <span class=\"math-container\">$x$</span> (with certain properties) be given. 
The enemy chooses an <span class=\"math-container\">$\\varepsilon&gt;0$</span> and we must then find a <span class=\"math-container\">$z$</span> (with certain other properties) within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>We can do that by first finding an <span class=\"math-container\">$y$</span> (with yet other properties) within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$x$</span> and <em>then</em> finding a <span class=\"math-container\">$z$</span> with the desired property within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$y$</span>. Then, due to the triangle inequality, that <span class=\"math-container\">$z$</span> will be within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>The task is now reduced to proving separately that (1) there is an <span class=\"math-container\">$y$</span> near every <span class=\"math-container\">$x$</span>, and (2) there is a <span class=\"math-container\">$z$</span> near every <span class=\"math-container\">$y$</span>.</p>\n</blockquote>\n<p>In a uniform space, the enemy choosing a <span class=\"math-container\">$\\varepsilon$</span> corresponds to choosing an <span class=\"math-container\">$U$</span>, and splitting the <span class=\"math-container\">$\\varepsilon$</span> into two <span class=\"math-container\">$\\frac\\varepsilon2$</span>s corresponds to picking a <span class=\"math-container\">$V$</span> by the axiom.</p>\n<p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting <span class=\"math-container\">$\\varepsilon$</span> into seven <span class=\"math-container\">$\\frac\\varepsilon7$</span>s. 
This would be modeled in a uniform space by several successive uses of the axiom.</p>\n<p>Sometimes it is convenient that the pieces you split the <span class=\"math-container\">$\\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class=\"math-container\">$V$</span> such that <span class=\"math-container\">$V\\circ V\\subseteq U$</span>, and not just <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$W$</span> such that <span class=\"math-container\">$V\\circ W\\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). But the difference is not really important; the <span class=\"math-container\">$V\\circ W$</span> version implies the <span class=\"math-container\">$V\\circ V$</span> version because entourages are preserved by intersection.</p>\n" }, { "answer_id": 3091791, "score": 1, "text": "<p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, makes sense if we view an entourage <span class=\"math-container\">$U$</span> as a mapping taking any point <span class=\"math-container\">$x$</span> to a neighborhood <span class=\"math-container\">$U[x]$</span> (with the different neighborhoods having the same 'size').</p>\n\n<p>In particular, this allows us to find a entourage <span class=\"math-container\">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class=\"math-container\">$V \\circ V$</span> is twice the 'radius' of <span class=\"math-container\">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p>\n\n<p>Why does this matter? 
Well, consider a case where this isn't true: if <span class=\"math-container\">$(X, d)$</span> is a metric space, we could let \n <span class=\"math-container\">$$ U_{r}[x] = \\{ y \\in X : 1 &lt; d(x,y) &lt; r \\} \\cup \\{x\\} $$</span>\nfor each <span class=\"math-container\">$r &gt; 1$</span>. Then I believe the collection of <span class=\"math-container\">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class=\"math-container\">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p>\n\n<p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p>\n" }, { "answer_id": 2087779, "score": 6, "text": "<p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p>\n<blockquote>\n<p>Let <span class=\"math-container\">$x$</span> (with certain properties) be given. 
The enemy chooses an <span class=\"math-container\">$\\varepsilon&gt;0$</span> and we must then find a <span class=\"math-container\">$z$</span> (with certain other properties) within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>We can do that by first finding an <span class=\"math-container\">$y$</span> (with yet other properties) within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$x$</span> and <em>then</em> finding a <span class=\"math-container\">$z$</span> with the desired property within <span class=\"math-container\">$\\varepsilon/2$</span> of <span class=\"math-container\">$y$</span>. Then, due to the triangle inequality, that <span class=\"math-container\">$z$</span> will be within <span class=\"math-container\">$\\varepsilon$</span> of <span class=\"math-container\">$x$</span>.</p>\n<p>The task is now reduced to proving separately that (1) there is an <span class=\"math-container\">$y$</span> near every <span class=\"math-container\">$x$</span>, and (2) there is a <span class=\"math-container\">$z$</span> near every <span class=\"math-container\">$y$</span>.</p>\n</blockquote>\n<p>In a uniform space, the enemy choosing a <span class=\"math-container\">$\\varepsilon$</span> corresponds to choosing an <span class=\"math-container\">$U$</span>, and splitting the <span class=\"math-container\">$\\varepsilon$</span> into two <span class=\"math-container\">$\\frac\\varepsilon2$</span>s corresponds to picking a <span class=\"math-container\">$V$</span> by the axiom.</p>\n<p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting <span class=\"math-container\">$\\varepsilon$</span> into seven <span class=\"math-container\">$\\frac\\varepsilon7$</span>s. 
This would be modeled in a uniform space by several successive uses of the axiom.</p>\n<p>Sometimes it is convenient that the pieces you split the <span class=\"math-container\">$\\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class=\"math-container\">$V$</span> such that <span class=\"math-container\">$V\\circ V\\subseteq U$</span>, and not just <span class=\"math-container\">$V$</span> and <span class=\"math-container\">$W$</span> such that <span class=\"math-container\">$V\\circ W\\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). But the difference is not really important; the <span class=\"math-container\">$V\\circ W$</span> version implies the <span class=\"math-container\">$V\\circ V$</span> version because entourages are preserved by intersection.</p>\n" }, { "answer_id": 3091791, "score": 1, "text": "<p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, makes sense if we view an entourage <span class=\"math-container\">$U$</span> as a mapping taking any point <span class=\"math-container\">$x$</span> to a neighborhood <span class=\"math-container\">$U[x]$</span> (with the different neighborhoods having the same 'size').</p>\n\n<p>In particular, this allows us to find a entourage <span class=\"math-container\">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class=\"math-container\">$V \\circ V$</span> is twice the 'radius' of <span class=\"math-container\">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p>\n\n<p>Why does this matter? 
Well, consider a case where this isn't true: if <span class=\"math-container\">$(X, d)$</span> is a metric space, we could let \n <span class=\"math-container\">$$ U_{r}[x] = \\{ y \\in X : 1 &lt; d(x,y) &lt; r \\} \\cup \\{x\\} $$</span>\nfor each <span class=\"math-container\">$r &gt; 1$</span>. Then I believe the collection of <span class=\"math-container\">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class=\"math-container\">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p>\n\n<p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p>\n" } ]
Question: What is the purpose of this axiom in the definition of a uniform space? Answer 1: <p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p> <blockquote> <p>Let <span class="math-container">$x$</span> (with certain properties) be given. The enemy chooses an <span class="math-container">$\varepsilon&gt;0$</span> and we must then find a <span class="math-container">$z$</span> (with certain other properties) within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>We can do that by first finding an <span class="math-container">$y$</span> (with yet other properties) within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$x$</span> and <em>then</em> finding a <span class="math-container">$z$</span> with the desired property within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$y$</span>. Then, due to the triangle inequality, that <span class="math-container">$z$</span> will be within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>The task is now reduced to proving separately that (1) there is an <span class="math-container">$y$</span> near every <span class="math-container">$x$</span>, and (2) there is a <span class="math-container">$z$</span> near every <span class="math-container">$y$</span>.</p> </blockquote> <p>In a uniform space, the enemy choosing a <span class="math-container">$\varepsilon$</span> corresponds to choosing an <span class="math-container">$U$</span>, and splitting the <span class="math-container">$\varepsilon$</span> into two <span class="math-container">$\frac\varepsilon2$</span>s corresponds to picking a <span class="math-container">$V$</span> by the axiom.</p> <p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting <span 
class="math-container">$\varepsilon$</span> into seven <span class="math-container">$\frac\varepsilon7$</span>s. This would be modeled in a uniform space by several successive uses of the axiom.</p> <p>Sometimes it is convenient that the pieces you split the <span class="math-container">$\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class="math-container">$V$</span> such that <span class="math-container">$V\circ V\subseteq U$</span>, and not just <span class="math-container">$V$</span> and <span class="math-container">$W$</span> such that <span class="math-container">$V\circ W\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). But the difference is not really important; the <span class="math-container">$V\circ W$</span> version implies the <span class="math-container">$V\circ V$</span> version because entourages are preserved by intersection.</p> Answer 2: <p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, it makes sense if we view an entourage <span class="math-container">$U$</span> as a mapping taking any point <span class="math-container">$x$</span> to a neighborhood <span class="math-container">$U[x]$</span> (with the different neighborhoods having the same 'size').</p> <p>In particular, this allows us to find an entourage <span class="math-container">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class="math-container">$V \circ V$</span> is twice the 'radius' of <span class="math-container">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p> <p>Why does this matter? 
Well, consider a case where this isn't true: if <span class="math-container">$(X, d)$</span> is a metric space, we could let <span class="math-container">$$ U_{r}[x] = \{ y \in X : 1 &lt; d(x,y) &lt; r \} \cup \{x\} $$</span> for each <span class="math-container">$r &gt; 1$</span>. Then I believe the collection of <span class="math-container">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class="math-container">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p> <p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p> Answer 3: <p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p> <blockquote> <p>Let <span class="math-container">$x$</span> (with certain properties) be given. 
The enemy chooses an <span class="math-container">$\varepsilon&gt;0$</span> and we must then find a <span class="math-container">$z$</span> (with certain other properties) within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>We can do that by first finding an <span class="math-container">$y$</span> (with yet other properties) within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$x$</span> and <em>then</em> finding a <span class="math-container">$z$</span> with the desired property within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$y$</span>. Then, due to the triangle inequality, that <span class="math-container">$z$</span> will be within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>The task is now reduced to proving separately that (1) there is an <span class="math-container">$y$</span> near every <span class="math-container">$x$</span>, and (2) there is a <span class="math-container">$z$</span> near every <span class="math-container">$y$</span>.</p> </blockquote> <p>In a uniform space, the enemy choosing a <span class="math-container">$\varepsilon$</span> corresponds to choosing an <span class="math-container">$U$</span>, and splitting the <span class="math-container">$\varepsilon$</span> into two <span class="math-container">$\frac\varepsilon2$</span>s corresponds to picking a <span class="math-container">$V$</span> by the axiom.</p> <p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting <span class="math-container">$\varepsilon$</span> into seven <span class="math-container">$\frac\varepsilon7$</span>s. 
This would be modeled in a uniform space by several successive uses of the axiom.</p> <p>Sometimes it is convenient that the pieces you split the <span class="math-container">$\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class="math-container">$V$</span> such that <span class="math-container">$V\circ V\subseteq U$</span>, and not just <span class="math-container">$V$</span> and <span class="math-container">$W$</span> such that <span class="math-container">$V\circ W\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). But the difference is not really important; the <span class="math-container">$V\circ W$</span> version implies the <span class="math-container">$V\circ V$</span> version because entourages are preserved by intersection.</p> Answer 4: <p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, it makes sense if we view an entourage <span class="math-container">$U$</span> as a mapping taking any point <span class="math-container">$x$</span> to a neighborhood <span class="math-container">$U[x]$</span> (with the different neighborhoods having the same 'size').</p> <p>In particular, this allows us to find an entourage <span class="math-container">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class="math-container">$V \circ V$</span> is twice the 'radius' of <span class="math-container">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p> <p>Why does this matter? Well, consider a case where this isn't true: if <span class="math-container">$(X, d)$</span> is a metric space, we could let <span class="math-container">$$ U_{r}[x] = \{ y \in X : 1 &lt; d(x,y) &lt; r \} \cup \{x\} $$</span> for each <span class="math-container">$r &gt; 1$</span>. 
Then I believe the collection of <span class="math-container">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class="math-container">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p> <p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p> Answer 5: <p>The purpose of the axiom is to make sure that this common proof motif from metric spaces will generalize to uniform spaces:</p> <blockquote> <p>Let <span class="math-container">$x$</span> (with certain properties) be given. The enemy chooses an <span class="math-container">$\varepsilon&gt;0$</span> and we must then find a <span class="math-container">$z$</span> (with certain other properties) within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>We can do that by first finding an <span class="math-container">$y$</span> (with yet other properties) within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$x$</span> and <em>then</em> finding a <span class="math-container">$z$</span> with the desired property within <span class="math-container">$\varepsilon/2$</span> of <span class="math-container">$y$</span>. 
Then, due to the triangle inequality, that <span class="math-container">$z$</span> will be within <span class="math-container">$\varepsilon$</span> of <span class="math-container">$x$</span>.</p> <p>The task is now reduced to proving separately that (1) there is an <span class="math-container">$y$</span> near every <span class="math-container">$x$</span>, and (2) there is a <span class="math-container">$z$</span> near every <span class="math-container">$y$</span>.</p> </blockquote> <p>In a uniform space, the enemy choosing a <span class="math-container">$\varepsilon$</span> corresponds to choosing an <span class="math-container">$U$</span>, and splitting the <span class="math-container">$\varepsilon$</span> into two <span class="math-container">$\frac\varepsilon2$</span>s corresponds to picking a <span class="math-container">$V$</span> by the axiom.</p> <p>With metric spaces, we sometimes need to prove nearness in a chain of more than two steps, for example splitting <span class="math-container">$\varepsilon$</span> into seven <span class="math-container">$\frac\varepsilon7$</span>s. This would be modeled in a uniform space by several successive uses of the axiom.</p> <p>Sometimes it is convenient that the pieces you split the <span class="math-container">$\varepsilon$</span> into are <em>equal</em>, which is why the axiom gives you a single <span class="math-container">$V$</span> such that <span class="math-container">$V\circ V\subseteq U$</span>, and not just <span class="math-container">$V$</span> and <span class="math-container">$W$</span> such that <span class="math-container">$V\circ W\subseteq U$</span> (which would have been enough to enable the simple case I've sketched above). 
But the difference is not really important; the <span class="math-container">$V\circ W$</span> version implies the <span class="math-container">$V\circ V$</span> version because entourages are preserved by intersection.</p> Answer 6: <p>From a purely intuitive standpoint, I believe the axiom can be interpreted as implying that there are arbitrarily small entourages. That is, it makes sense if we view an entourage <span class="math-container">$U$</span> as a mapping taking any point <span class="math-container">$x$</span> to a neighborhood <span class="math-container">$U[x]$</span> (with the different neighborhoods having the same 'size').</p> <p>In particular, this allows us to find an entourage <span class="math-container">$V$</span> of at most half the 'radius' (imagining an open ball), since <span class="math-container">$V \circ V$</span> is twice the 'radius' of <span class="math-container">$V$</span>. Since entourages are closed under taking supersets (which allows us to increase the 'radius'), we get that there are entourages of any size.</p> <p>Why does this matter? Well, consider a case where this isn't true: if <span class="math-container">$(X, d)$</span> is a metric space, we could let <span class="math-container">$$ U_{r}[x] = \{ y \in X : 1 &lt; d(x,y) &lt; r \} \cup \{x\} $$</span> for each <span class="math-container">$r &gt; 1$</span>. Then I believe the collection of <span class="math-container">$U_{r}$</span> generates a collection of sets satisfying every axiom but the fourth. What topology does it induce? Well, it will be hard for the sets <span class="math-container">$U_{r}[x]$</span> to be neighborhoods; you would hope that it gives the metric topology, but it can't since there is no way to represent distances less than 1 (let alone arbitrarily small distances).</p> <p>I am relatively certain that this does not actually induce a valid topology, in which case you have a problem. 
If uniform continuity were defined on such a space, you could have it being uniformly continuous yet not continuous in any meaningful sense. More generally, if uniform spaces specialize topological spaces, then every entourage must generate a topology.</p>
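The remark at the end of the first answer (that the weaker "$V\circ W$" form of the axiom already gives the stated "$V\circ V$" form) can be spelled out in two lines; here is a sketch:

```latex
% Sketch: suppose that for every entourage U there are entourages V, W
% with V \circ W \subseteq U.  Given U, pick such V, W and set
% V' = V \cap W, which is again an entourage (entourages are closed
% under finite intersection).  Since V' \subseteq V, V' \subseteq W,
% and composition is monotone in both arguments,
\[
  V' \circ V' \;\subseteq\; V \circ W \;\subseteq\; U .
\]
```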
Find all triples $(a,b,c)$ of integers
2,087,891
4
https://math.stackexchange.com/questions/2087891/find-all-triples-a-b-c-of-integers
<blockquote> <p>Find all triples $(a,b,c)$ of integers such that $a+b+c = 3$ and $ab+bc+ca +2abc= 2017$.</p> </blockquote> <p>Adding the two equations and then adding $1$ gives $(a+1)(b+1)(c+1)+abc = 2021$. I then thought about trying to prove that $abc$ must divide $2021$ but didn't see how to prove that. </p>
[ { "answer_id": 2087894, "score": 8, "text": "<p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p>\n\n<p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \\cdot 17 \\cdot 19$.</p>\n" }, { "answer_id": 2087892, "score": 0, "text": "<p>Hint </p>\n\n<p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p>\n" }, { "answer_id": 2087894, "score": 8, "text": "<p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p>\n\n<p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \\cdot 17 \\cdot 19$.</p>\n" }, { "answer_id": 2087892, "score": 0, "text": "<p>Hint </p>\n\n<p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p>\n" }, { "answer_id": 2087894, "score": 8, "text": "<p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p>\n\n<p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \\cdot 17 \\cdot 19$.</p>\n" }, { "answer_id": 2087892, "score": 0, "text": "<p>Hint </p>\n\n<p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p>\n" } ]
Question: Find all triples $(a,b,c)$ of integers Answer 1: <p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p> <p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \cdot 17 \cdot 19$.</p> Answer 2: <p>Hint </p> <p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p> Answer 3: <p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p> <p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \cdot 17 \cdot 19$.</p> Answer 4: <p>Hint </p> <p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p> Answer 5: <p>Hint: multiply the first equation by 2 and the second by 4 and then add them and add 1 to both terms.</p> <p>You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \cdot 17 \cdot 19$.</p> Answer 6: <p>Hint </p> <p>$(a+b+c) ^2=a^2+b^2+c^2+2 (ab +bc+ca) $</p>
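The first hint can be pushed to a full solution mechanically: each factor $1+2a$, $1+2b$, $1+2c$ is a (possibly negative) odd divisor of $8075$, and the three factors sum to $3+2(a+b+c)=9$. The sketch below (Python written for this note; the variable names are mine, not from the original posts) enumerates signed divisor triples with that sum:

```python
# Solve a+b+c = 3, ab+bc+ca+2abc = 2017 over the integers via the hint
# (1+2a)(1+2b)(1+2c) = 8075 with (1+2a)+(1+2b)+(1+2c) = 9.
from itertools import product

N = 8075  # = 5^2 * 17 * 19
divs = [d for d in range(1, N + 1) if N % d == 0]
divs += [-d for d in divs]  # allow negative factors too

solutions = set()
for d1, d2 in product(divs, repeat=2):
    d3 = 9 - d1 - d2                      # forced by the sum condition
    if d1 * d2 * d3 == N:
        # recover (a, b, c) from the factors 1+2a, 1+2b, 1+2c
        a, b, c = (d1 - 1) // 2, (d2 - 1) // 2, (d3 - 1) // 2
        solutions.add((a, b, c))

# sanity-check every triple against the original two equations
for a, b, c in solutions:
    assert a + b + c == 3
    assert a*b + b*c + c*a + 2*a*b*c == 2017

print(sorted(solutions))
```

The only factor triple that works is $\{-1,-85,95\}$, giving the six permutations of $(a,b,c)=(-1,-43,47)$; indeed $-1-43+47=3$ and $43-2021-47+2\cdot 2021=2017$.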
$f$ differentiable except in $1$ point
2,087,976
4
https://math.stackexchange.com/questions/2087976/f-differentiable-except-in-1-point
<p>Assume we have a function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f$ is differentiable for all non-zero $x$, $f$ is continuous at $0$ and $$\lim_{x \uparrow 0} f'(x) = \lim_{x \downarrow 0} f'(x) &lt; \infty. $$ Is it then true that $f$ is also differentiable at $0$?</p> <p>If we drop the hypothesis that $f$ is continuous, this is not true since the signum function is a simple counterexample. But what if $f$ is continuous? I don't think this is true but I can't find a counterexample...</p>
[ { "answer_id": 2087997, "score": 6, "text": "<p>Yes, $f$ is differentiable at $0$.</p>\n\n<p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$,\n$$\\frac{f(x)-f(0)}{x-0}=f'(t_x)$$\nfor some $t_x\\in (0,x).$ \nHence \n$$\\lim_{x\\to 0^+}\\frac{f(x)-f(0)}{x-0}=\\lim_{x\\to 0^+}f'(t_x)=\\lim_{t\\to 0^+}f'(t)$$\nThe same can be done in $[x,0]$ for $x&lt;0$.</p>\n" }, { "answer_id": 2088001, "score": 1, "text": "<p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p>\n" }, { "answer_id": 2087997, "score": 6, "text": "<p>Yes, $f$ is differentiable at $0$.</p>\n\n<p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$,\n$$\\frac{f(x)-f(0)}{x-0}=f'(t_x)$$\nfor some $t_x\\in (0,x).$ \nHence \n$$\\lim_{x\\to 0^+}\\frac{f(x)-f(0)}{x-0}=\\lim_{x\\to 0^+}f'(t_x)=\\lim_{t\\to 0^+}f'(t)$$\nThe same can be done in $[x,0]$ for $x&lt;0$.</p>\n" }, { "answer_id": 2088001, "score": 1, "text": "<p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p>\n" }, { "answer_id": 2087997, "score": 6, "text": "<p>Yes, $f$ is differentiable at $0$.</p>\n\n<p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$,\n$$\\frac{f(x)-f(0)}{x-0}=f'(t_x)$$\nfor some $t_x\\in (0,x).$ \nHence \n$$\\lim_{x\\to 0^+}\\frac{f(x)-f(0)}{x-0}=\\lim_{x\\to 0^+}f'(t_x)=\\lim_{t\\to 0^+}f'(t)$$\nThe same can be done in $[x,0]$ for $x&lt;0$.</p>\n" }, { "answer_id": 2088001, "score": 1, "text": "<p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p>\n" } ]
Question: $f$ differentiable except in $1$ point Answer 1: <p>Yes, $f$ is differentiable at $0$.</p> <p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$, $$\frac{f(x)-f(0)}{x-0}=f'(t_x)$$ for some $t_x\in (0,x).$ Hence $$\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^+}f'(t_x)=\lim_{t\to 0^+}f'(t)$$ The same can be done in $[x,0]$ for $x&lt;0$.</p> Answer 2: <p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p> Answer 3: <p>Yes, $f$ is differentiable at $0$.</p> <p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$, $$\frac{f(x)-f(0)}{x-0}=f'(t_x)$$ for some $t_x\in (0,x).$ Hence $$\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^+}f'(t_x)=\lim_{t\to 0^+}f'(t)$$ The same can be done in $[x,0]$ for $x&lt;0$.</p> Answer 4: <p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p> Answer 5: <p>Yes, $f$ is differentiable at $0$.</p> <p>Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x&gt;0$, $$\frac{f(x)-f(0)}{x-0}=f'(t_x)$$ for some $t_x\in (0,x).$ Hence $$\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^+}f'(t_x)=\lim_{t\to 0^+}f'(t)$$ The same can be done in $[x,0]$ for $x&lt;0$.</p> Answer 6: <p>$f$ is differentiable at $0$. Hint: use the definition of the derivative at $0$; by continuity of $f$ at $0$ both the numerator and the denominator converge to $0$, then apply L'Hôpital's Rule.</p>
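A quick numerical illustration (not a proof) of the Mean Value Theorem argument above. The example function $f(x)=x\lvert x\rvert$ is my own choice: it is differentiable for $x\neq 0$ with $f'(x)=2\lvert x\rvert$, so both one-sided limits of $f'$ at $0$ equal $0$, and the difference quotients at $0$ approach that same value:

```python
# f(x) = x*|x| is continuous at 0 and differentiable away from 0,
# with f'(x) = 2|x|, so lim_{x->0+} f'(x) = lim_{x->0-} f'(x) = 0.
def f(x):
    return x * abs(x)

def fprime(x):  # valid for x != 0
    return 2 * abs(x)

for h in (1e-2, 1e-4, 1e-6):
    dq_right = (f(h) - f(0)) / h       # difference quotient from the right
    dq_left = (f(-h) - f(0)) / (-h)    # ... and from the left
    print(h, dq_right, dq_left, fprime(h))

# Both quotients shrink with h, matching lim_{t->0} f'(t) = 0,
# so f'(0) exists and equals 0 -- as the MVT argument predicts.
```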
Is the cartesian coordinate system &#39;special&#39;?
2,087,984
4
https://math.stackexchange.com/questions/2087984/is-the-cartesian-coordinate-system-special
<p>I have started reading about vector calculus and I noted that with other coordinate systems (spherical/polar etc), when you differentiate a vector in this coordinate system you have to differentiate the basis vectors as well. This was at first confusing to me as I thought that, since the vectors are orthogonal, there should not be a need to do this as you could place the origin anywhere. But then I realised that the origin would then be moving (say, with time) too. I haven't quite figured out how/why that applies yet...</p> <p>So with regards to the cartesian coordinate system, I found <a href="https://www.google.co.uk/url?sa=t&amp;source=web&amp;rct=j&amp;url=https://math.stackexchange.com/questions/1333946/definition-of-a-cartesian-coordinate-system&amp;ved=0ahUKEwje-v_X8LDRAhVoyFQKHXQJCsEQFghmMAY&amp;usg=AFQjCNHXDwLrhtI-oxwkJpSo5Powb3aB9A&amp;sig2=2Ybh3MbqfXbaZkoQJEURAw" rel="nofollow noreferrer">this</a> link. It is a bit too advanced for me but the first answer mentions that the derivatives of the bases are also orthogonal. I am guessing this is not so with coordinate systems that are not cartesian?</p> <p>Essentially, to what extent can the cartesian coordinate system be said to be 'special', if at all? Is there a coordinate system which better describes the real world? I have a feeling this will have to do with general relativity and curved spacetime...</p>
[ { "answer_id": 2088000, "score": 3, "text": "<p>The reason differentiating in non-rectangular coordinates requires additional work is because the basic vectors at any given point in space may be orthogonal to each other, but they aren't orthogonal to the basic vectors at any other given point in space. They don't need to vary with time; the geometry of the situation need not be dependent on any non-spatial variables. </p>\n\n<p>For example, consider $\\hat{{r}}$. The $\\hat{r}$ you have at, say $\\theta = \\pi, ~ \\phi = \\pi / 4$ is <em>not</em> the same $\\hat{r}$ you have at other points in space. In other words, the basis vectors in spherical coordinates are not constants; they are functions of each other.</p>\n\n<p>Compare that with $\\hat{i}, \\hat{j}, \\hat{k}$. These bases are invariant. $\\hat{x}$ is $\\hat{x}$ anywhere you go.</p>\n\n<p>So are rectangular coordinates \"special\"? Not in any meaningful way. Everything that's true in rectangular coordinates is true in spherical; you just need to be conscientious about what $\\hat{r}$ really means. </p>\n\n<p>Also, just to clarify in a somewhat non sequitur fashion, remember that basis vectors aren't coordinates; it would be somewhat inconvenient to specify a point in space as a linear combination of non rectangular bases. In fact, what is often done in practice (in basic electrodynamics, for example) is to use rectangular basis vectors but spherical coordinates. 
Coordinates tell you the address of a point in space, vectors $\\neq$ coordinates.</p>\n" }, { "answer_id": 2088009, "score": 0, "text": "<p>The Cartesian base vectors are the same at each point, while non-Cartesian vary over space and thus feature only local bases.</p>\n" }, { "answer_id": 2088000, "score": 3, "text": "<p>The reason differentiating in non-rectangular coordinates requires additional work is because the basic vectors at any given point in space may be orthogonal to each other, but they aren't orthogonal to the basic vectors at any other given point in space. They don't need to vary with time; the geometry of the situation need not be dependent on any non-spatial variables. </p>\n\n<p>For example, consider $\\hat{{r}}$. The $\\hat{r}$ you have at, say $\\theta = \\pi, ~ \\phi = \\pi / 4$ is <em>not</em> the same $\\hat{r}$ you have at other points in space. In other words, the basis vectors in spherical coordinates are not constants; they are functions of each other.</p>\n\n<p>Compare that with $\\hat{i}, \\hat{j}, \\hat{k}$. These bases are invariant. $\\hat{x}$ is $\\hat{x}$ anywhere you go.</p>\n\n<p>So are rectangular coordinates \"special\"? Not in any meaningful way. Everything that's true in rectangular coordinates is true in spherical; you just need to be conscientious about what $\\hat{r}$ really means. </p>\n\n<p>Also, just to clarify in a somewhat non sequitur fashion, remember that basis vectors aren't coordinates; it would be somewhat inconvenient to specify a point in space as a linear combination of non rectangular bases. In fact, what is often done in practice (in basic electrodynamics, for example) is to use rectangular basis vectors but spherical coordinates. 
Coordinates tell you the address of a point in space, vectors $\\neq$ coordinates.</p>\n" }, { "answer_id": 2088009, "score": 0, "text": "<p>The Cartesian base vectors are the same at each point, while non-Cartesian vary over space and thus feature only local bases.</p>\n" }, { "answer_id": 2088000, "score": 3, "text": "<p>The reason differentiating in non-rectangular coordinates requires additional work is because the basic vectors at any given point in space may be orthogonal to each other, but they aren't orthogonal to the basic vectors at any other given point in space. They don't need to vary with time; the geometry of the situation need not be dependent on any non-spatial variables. </p>\n\n<p>For example, consider $\\hat{{r}}$. The $\\hat{r}$ you have at, say $\\theta = \\pi, ~ \\phi = \\pi / 4$ is <em>not</em> the same $\\hat{r}$ you have at other points in space. In other words, the basis vectors in spherical coordinates are not constants; they are functions of each other.</p>\n\n<p>Compare that with $\\hat{i}, \\hat{j}, \\hat{k}$. These bases are invariant. $\\hat{x}$ is $\\hat{x}$ anywhere you go.</p>\n\n<p>So are rectangular coordinates \"special\"? Not in any meaningful way. Everything that's true in rectangular coordinates is true in spherical; you just need to be conscientious about what $\\hat{r}$ really means. </p>\n\n<p>Also, just to clarify in a somewhat non sequitur fashion, remember that basis vectors aren't coordinates; it would be somewhat inconvenient to specify a point in space as a linear combination of non rectangular bases. In fact, what is often done in practice (in basic electrodynamics, for example) is to use rectangular basis vectors but spherical coordinates. Coordinates tell you the address of a point in space, vectors $\\neq$ coordinates.</p>\n" }, { "answer_id": 2088009, "score": 0, "text": "<p>The Cartesian base vectors are the same at each point, while non-Cartesian vary over space and thus feature only local bases.</p>\n" } ]
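The answers' point, that $\hat{r}$ is a different vector at different points while the Cartesian $\hat{x}$ is the same everywhere, and that differentiating spherical basis vectors produces other basis vectors, is easy to check numerically. A small sketch (function names are mine; physics convention, $\theta$ polar, $\phi$ azimuthal):

```python
import math

# Spherical radial and polar unit vectors, expressed in the fixed
# Cartesian basis.
def r_hat(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def theta_hat(theta, phi):
    return (math.cos(theta) * math.cos(phi),
            math.cos(theta) * math.sin(phi),
            -math.sin(theta))

# Unlike x_hat = (1, 0, 0), r_hat is a *different* vector at different points:
print(r_hat(math.pi / 2, 0.0))          # ~(1, 0, 0)
print(r_hat(math.pi / 2, math.pi / 2))  # ~(0, 1, 0)

# d(r_hat)/d(theta) equals theta_hat: check with a central finite difference.
th, ph, h = 0.7, 1.1, 1e-6
deriv = tuple((a - b) / (2 * h)
              for a, b in zip(r_hat(th + h, ph), r_hat(th - h, ph)))
err = max(abs(d - t) for d, t in zip(deriv, theta_hat(th, ph)))
print(err)  # tiny: the derivative of one basis vector is another basis vector
```

This is exactly why differentiating a vector expressed in spherical components requires differentiating the basis vectors too, whereas the Cartesian basis contributes nothing extra.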
Question: Is the cartesian coordinate system &#39;special&#39;? Answer 1: <p>The reason differentiating in non-rectangular coordinates requires additional work is that the basis vectors at any given point in space may be orthogonal to each other, but they aren't orthogonal to the basis vectors at any other given point in space. They don't need to vary with time; the geometry of the situation need not be dependent on any non-spatial variables. </p> <p>For example, consider $\hat{r}$. The $\hat{r}$ you have at, say $\theta = \pi, ~ \phi = \pi / 4$ is <em>not</em> the same $\hat{r}$ you have at other points in space. In other words, the basis vectors in spherical coordinates are not constants; they are functions of position.</p> <p>Compare that with $\hat{i}, \hat{j}, \hat{k}$. These bases are invariant. $\hat{x}$ is $\hat{x}$ anywhere you go.</p> <p>So are rectangular coordinates "special"? Not in any meaningful way. Everything that's true in rectangular coordinates is true in spherical; you just need to be conscientious about what $\hat{r}$ really means. </p> <p>Also, just to clarify in a somewhat non sequitur fashion, remember that basis vectors aren't coordinates; it would be somewhat inconvenient to specify a point in space as a linear combination of non-rectangular bases. In fact, what is often done in practice (in basic electrodynamics, for example) is to use rectangular basis vectors but spherical coordinates. Coordinates tell you the address of a point in space, vectors $\neq$ coordinates.</p> Answer 2: <p>The Cartesian base vectors are the same at each point, while non-Cartesian ones vary over space and thus feature only local bases.</p>
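The position-dependence of the spherical basis described above can be checked numerically. A minimal pure-Python sketch, assuming the physics convention $\hat{r} = (\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta)$ (the function name and sample angles are illustrative, not from the answers):

```python
import math

def r_hat(theta, phi):
    # Spherical radial unit vector expressed in the fixed Cartesian basis
    # (physics convention: theta = polar angle, phi = azimuth).
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# x-hat is (1, 0, 0) at every point in space, but r-hat is not:
a = r_hat(math.pi / 2, 0.0)          # points along +x
b = r_hat(math.pi / 2, math.pi / 2)  # points along +y

def close(u, v, tol=1e-12):
    return all(abs(ui - vi) < tol for ui, vi in zip(u, v))

print(close(a, (1.0, 0.0, 0.0)))  # True
print(close(b, (0.0, 1.0, 0.0)))  # True
print(close(a, b))                # False: same symbol r-hat, different direction
```

The same symbol $\hat{r}$ thus names a different Cartesian vector at each point, which is exactly why differentiating in spherical coordinates requires extra terms.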
Proving Convergence of Least Squares Regression with i.i.d. Gaussian Noise
2,088,010
4
https://math.stackexchange.com/questions/2088010/proving-convergence-of-least-squares-regression-with-i-i-d-gaussian-noise
<p>I have a basic question that I can't seem to find an answer for -- perhaps I'm not wording it correctly. Suppose that we have an $n$-by-$d$ matrix $X$ that represents input features, and we have an $n$-by-$1$ vector of output labels, $y$. Furthermore, let these labels be a noisy linear transformation of the input features:</p> <p>$$ y = X \cdot w + \epsilon$$</p> <p>where $w$ represents a $d$-dimensional set of true weights and $\epsilon$ is i.i.d. zero-mean Gaussian noise, and I am interested in inferring $w$ using ordinary least squares linear regression (OLS).</p> <p>I would like to prove that as the number of data points, $n$, increases, the weights predicted by OLS converge in probability to the true weights $w$ -- say, in $l_2$-norm.</p> <p>Can anyone help me to go about proving this, or point me to references? Thanks!</p> <p>As pointed out in the comments, it is important to keep in mind how $X$ is constructed. Let $X$ be a random matrix.</p>
[ { "answer_id": 2088151, "score": 3, "text": "<p>Here I'm going to weaken the assumptions: rather than \"Gaussian\" we'll say the errors have expected value $0$ and finite variance; rather than \"identically distributed\" we'll say they all have the same variance; and rather than \"independent\" we'll say \"uncorrelated\", i.e. all the covariances $\\text{are } 0.$</p>\n\n<p>The least-squares estimator of $w$ is\n$$\n\\hat w = (X^T X)^{-1} X^T y.\n$$\n(BTW, one must have $\\operatorname{rank}(X) = d$; otherwise the matrix inverse above makes no sense and the least-squares estimator is not unique.) Then we have\n$$\n\\operatorname{var} (\\hat w) = (\\text{a }d\\times d\\text{ matrix }) = \\operatorname{E}\\big( (\\hat w - w)(\\hat w - w)^T\\big) = \\sigma^2(X^T X)^{-1} \\tag 1\n$$\nwhere $\\sigma^2$ is the noise variance, and we get this via the identity\n$$\n\\operatorname{var}(A\\hat w) = A\\Big(\\operatorname{var}(\\hat w) \\Big) A^T, \\text{ for any } k\\times d \\text{ matrix } A.\n$$</p>\n\n<p>Increasing the sample size $n$ means increasing the number of rows in the matrix $X$. This leaves a big question: In what way does the new row of $X$ vary within the $d$-dimensional row space as more and more new rows keep getting added? Without that, you don't have a well-defined question. So here I'll go out on a limb and say that the way we'll do it is we will increase the sample size $n$ by stacking copies of $X$ on top of each other:\n$$\n\\begin{bmatrix} X \\\\ \\vdots \\\\ X \\end{bmatrix} = \\text{an } (nk)\\times d \\text{ matrix} \\tag 2\n$$\nand we'll let $k$ keep growing and the $nk$ errors will all be uncorrelated with each other and otherwise satisfy the assumptions. This just means we're doing replications with the same set of $X$ values. In some contexts, this would be considered repetition rather than replication: we get new $y$ values for each row of $X$, rather than getting new rows of $X$ different from those we already had.</p>\n\n<ul>\n<li>Show that $(1) \\to 0$ as $k\\to\\infty$ (except that instead of $X$ in $(1)$ we'll have the matrix $(2)$.)</li>\n<li>Observe that the variances of the components of $\\hat w$ are the diagonal elements of $(1).$</li>\n<li>Finally, apply <a href=\"https://en.wikipedia.org/wiki/Chebyshev%27s_inequality\" rel=\"nofollow noreferrer\">Chebyshev's inequality</a>.</li>\n</ul>\n" }, { "answer_id": 2088071, "score": 2, "text": "<p>Here is the probabilistic approach to the proof. Assume that $X$ is a random matrix and $E\\| x \\| &lt; \\infty$ (and $Ey^2 &lt; \\infty)$.</p>\n\n<p>Your minimization problem is $$\\min_{w\\in \\mathbb{R}^d} \\| Xw - y \\|_2^2. $$\nIt is straightforward to show that the OLS estimator (the \"weights\", using your terminology) is given by \n$$\n\\hat{w}=(X'X)^{-1}X'y,\n$$\nor equivalently \n$$\n\\hat{w}= \\left( \\frac 1 n \\sum_{i=1}^n x_i x_i^T\\right)^{-1} \\left( \\frac 1 n \\sum_{i=1}^n x_i^T y_i\\right).\n$$\nNote that convergence in $L^2$ norm occurs iff $\\hat{w}$ converges in probability to $w$, thus by using the WLLN you can observe that \n$$\n\\frac 1 n \\sum_{i=1}^nx_ix_i^T \\xrightarrow{p}Ex_ix_i^T=E(X'X)\n$$\nas $n\\to \\infty$, and \n$$\n\\frac 1 n \\sum_{i=1}^nx_i^Ty_i \\xrightarrow{p}E(X'y). \n$$\nAs such, by the continuous mapping theorem for $f(X'X, X'y)=(X'X)^{-1}X'y$, you get that \n$$\n\\hat{w}\\xrightarrow p (E(X'X))^{-1}E(X'y),\n$$\nwhich are the population (\"real\") weights of $C(X)$.</p>\n" } ]
Question: Proving Convergence of Least Squares Regression with i.i.d. Gaussian Noise Answer 1: <p>Here I'm going to weaken the assumptions: rather than "Gaussian" we'll say the errors have expected value $0$ and finite variance; rather than "identically distributed" we'll say they all have the same variance; and rather than "independent" we'll say "uncorrelated", i.e. all the covariances $\text{are } 0.$</p> <p>The least-squares estimator of $w$ is $$ \hat w = (X^T X)^{-1} X^T y. $$ (BTW, one must have $\operatorname{rank}(X) = d$; otherwise the matrix inverse above makes no sense and the least-squares estimator is not unique.) Then we have $$ \operatorname{var} (\hat w) = (\text{a }d\times d\text{ matrix }) = \operatorname{E}\big( (\hat w - w)(\hat w - w)^T\big) = \sigma^2(X^T X)^{-1} \tag 1 $$ where $\sigma^2$ is the noise variance, and we get this via the identity $$ \operatorname{var}(A\hat w) = A\Big(\operatorname{var}(\hat w) \Big) A^T, \text{ for any } k\times d \text{ matrix } A. $$</p> <p>Increasing the sample size $n$ means increasing the number of rows in the matrix $X$. This leaves a big question: In what way does the new row of $X$ vary within the $d$-dimensional row space as more and more new rows keep getting added? Without that, you don't have a well-defined question. So here I'll go out on a limb and say that the way we'll do it is we will increase the sample size $n$ by stacking copies of $X$ on top of each other: $$ \begin{bmatrix} X \\ \vdots \\ X \end{bmatrix} = \text{an } (nk)\times d \text{ matrix} \tag 2 $$ and we'll let $k$ keep growing and the $nk$ errors will all be uncorrelated with each other and otherwise satisfy the assumptions. This just means we're doing replications with the same set of $X$ values. In some contexts, this would be considered repetition rather than replication: we get new $y$ values for each row of $X$, rather than getting new rows of $X$ different from those we already had.</p> <ul> <li>Show that $(1) \to 0$ as $k\to\infty$ (except that instead of $X$ in $(1)$ we'll have the matrix $(2)$.)</li> <li>Observe that the variances of the components of $\hat w$ are the diagonal elements of $(1).$</li> <li>Finally, apply <a href="https://en.wikipedia.org/wiki/Chebyshev%27s_inequality" rel="nofollow noreferrer">Chebyshev's inequality</a>.</li> </ul> Answer 2: <p>Here is the probabilistic approach to the proof. Assume that $X$ is a random matrix and $E\| x \| &lt; \infty$ (and $Ey^2 &lt; \infty)$.</p> <p>Your minimization problem is $$\min_{w\in \mathbb{R}^d} \| Xw - y \|_2^2. $$ It is straightforward to show that the OLS estimator (the "weights", using your terminology) is given by $$ \hat{w}=(X'X)^{-1}X'y, $$ or equivalently $$ \hat{w}= \left( \frac 1 n \sum_{i=1}^n x_i x_i^T\right)^{-1} \left( \frac 1 n \sum_{i=1}^n x_i^T y_i\right). $$ Note that convergence in $L^2$ norm occurs iff $\hat{w}$ converges in probability to $w$, thus by using the WLLN you can observe that $$ \frac 1 n \sum_{i=1}^nx_ix_i^T \xrightarrow{p}Ex_ix_i^T=E(X'X) $$ as $n\to \infty$, and $$ \frac 1 n \sum_{i=1}^nx_i^Ty_i \xrightarrow{p}E(X'y). $$ As such, by the continuous mapping theorem for $f(X'X, X'y)=(X'X)^{-1}X'y$, you get that $$ \hat{w}\xrightarrow p (E(X'X))^{-1}E(X'y), $$ which are the population ("real") weights of $C(X)$.</p>
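The convergence claim can be sanity-checked with a small simulation. This is a minimal sketch for the special case $d = 1$ with no intercept, where the OLS formula reduces to the scalar ratio $\hat w = \sum_i x_i y_i / \sum_i x_i^2$; the true weight $2.5$, the unit noise level, and the sample sizes are illustrative assumptions, not part of the question:

```python
import random

def ols_1d(xs, ys):
    # Closed-form OLS for a single weight, no intercept: w-hat = sum(x*y) / sum(x*x).
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / sxx

random.seed(0)
w_true = 2.5

def error(n):
    # Draw n points from y = w_true * x + eps with standard Gaussian x and eps,
    # then return the absolute estimation error |w-hat - w_true|.
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [w_true * x + random.gauss(0.0, 1.0) for x in xs]
    return abs(ols_1d(xs, ys) - w_true)

for n in (10, 100, 10_000):
    # The error shrinks (in probability) as n grows, roughly like 1/sqrt(n).
    print(n, error(n))
```

In the noiseless case the estimator recovers the weight exactly, and with noise the typical error at sample size $n$ is on the order of $\sigma/\sqrt{n}$, matching the variance formula $\sigma^2 (X^T X)^{-1}$ from Answer 1.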
Sample joint density conditional on sufficient statistic depends on parameter for uniform distribution
2,088,027
4
https://math.stackexchange.com/questions/2088027/sample-joint-density-conditional-on-sufficient-statistic-depends-on-parameter-fo
<p>Let $X_1, X_2 \sim U(0, \theta)$ i.i.d., then obviously the sufficient statistic would be $T = \max\{X_1, X_2\}$, which can be easily proven by the factorization theorem. However, when I try to compute the density $f(x_1, x_2\mid t)$, it turns out to depend on the parameter $\theta$. This absolutely contradicts the definition of a sufficient statistic:</p> <p>$$ f(x_1, x_2, t) = \frac 1 {\theta^2}I\{t\leq \theta, \max\{x_1, x_2\}=t\} $$</p> <p>$$ f(t) = \frac{2t}{\theta^2}I\{t \leq \theta\} $$</p> <p>Thus</p> <p>$$ f(x_1, x_2\mid t) = \frac 1 {2t} I\{t\leq \theta\} $$</p> <p>which depends on $\theta$ through the indicator function $I\{t\leq \theta\}$. </p> <p>What's wrong with my reasoning? Thanks a lot!</p>
[ { "answer_id": 2088055, "score": 1, "text": "<p>I think you should have\n$$f(x_1,x_2 \\mid t) = \\frac{1}{2t} I\\{0 \\le t \\le \\theta, \\max\\{x_1,x_2\\} = t\\} = \\frac{1}{2t} I\\{\\max\\{x_1,x_2\\} = t\\}.$$</p>\n\n<p>Regarding the indicator arithmetic: if $I\\{0\\le t \\le \\theta\\} = 0$, then $f(x_1,x_2 \\mid t)$ is not even a well-defined probability distribution, so I think you can ignore this case and assume $0 \\le t \\le \\theta$ so that the denominator is nonzero.</p>\n\n<ul>\n<li><s>if $I\\{0 \\le t \\le \\theta, \\max\\{x_1,x_2\\}=t\\}=1$, then $I\\{0 \\le t \\le \\theta\\} =1$, so the ratio is $1$.</s></li>\n<li><s>otherwise $I\\{0 \\le t \\le \\theta, \\max\\{x_1,x_2\\}=t\\}=0$, so the ratio is $0$ regardless of what the denominator is (even when the denominator is zero)</s></li>\n</ul>\n\n<hr>\n\n<p>Edit:</p>\n\n<p>$$P(T \\le t) = (t/\\theta)^2 I\\{0 \\le t \\le \\theta\\} \\implies f(t) = \\frac{2t}{\\theta^2} I\\{0 \\le t \\le \\theta\\}$$</p>\n\n<p>$$f(x_1,x_2,t) = \\frac{1}{\\theta^2} I\\{0 \\le t \\le \\theta, \\max\\{x_1,x_2\\}=t\\}$$</p>\n" }, { "answer_id": 2088061, "score": 1, "text": "<p>$$f(x_1,x_2\\mid t)\\overset{\\text{Bayes}}{=}\\frac{f(x_1,x_2,t)}{f(t)}=\\frac{f(x_1,x_2)\\mathbb{I}[x_1,x_2\\le t]}{\\mathbb{P}[x_1,x_2\\le t]}=\\frac{\\frac{1}{\\theta^2}\\mathbb{I}[x_1,x_2\\le t]}{\\left(\\frac t \\theta \\right)^2} = \\frac{\\mathbb{I}[x_1,x_2\\le t]}{t^2}$$</p>\n" } ]
Question: Sample joint density conditional on sufficient statistic depends on parameter for uniform distribution Answer 1: <p>I think you should have $$f(x_1,x_2 \mid t) = \frac{1}{2t} I\{0 \le t \le \theta, \max\{x_1,x_2\} = t\} = \frac{1}{2t} I\{\max\{x_1,x_2\} = t\}.$$</p> <p>Regarding the indicator arithmetic: if $I\{0\le t \le \theta\} = 0$, then $f(x_1,x_2 \mid t)$ is not even a well-defined probability distribution, so I think you can ignore this case and assume $0 \le t \le \theta$ so that the denominator is nonzero.</p> <ul> <li><s>if $I\{0 \le t \le \theta, \max\{x_1,x_2\}=t\}=1$, then $I\{0 \le t \le \theta\} =1$, so the ratio is $1$.</s></li> <li><s>otherwise $I\{0 \le t \le \theta, \max\{x_1,x_2\}=t\}=0$, so the ratio is $0$ regardless of what the denominator is (even when the denominator is zero)</s></li> </ul> <hr> <p>Edit:</p> <p>$$P(T \le t) = (t/\theta)^2 I\{0 \le t \le \theta\} \implies f(t) = \frac{2t}{\theta^2} I\{0 \le t \le \theta\}$$</p> <p>$$f(x_1,x_2,t) = \frac{1}{\theta^2} I\{0 \le t \le \theta, \max\{x_1,x_2\}=t\}$$</p> Answer 2: <p>$$f(x_1,x_2\mid t)\overset{\text{Bayes}}{=}\frac{f(x_1,x_2,t)}{f(t)}=\frac{f(x_1,x_2)\mathbb{I}[x_1,x_2\le t]}{\mathbb{P}[x_1,x_2\le t]}=\frac{\frac{1}{\theta^2}\mathbb{I}[x_1,x_2\le t]}{\left(\frac t \theta \right)^2} = \frac{\mathbb{I}[x_1,x_2\le t]}{t^2}$$</p>
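The CDF and density of $T=\max\{X_1,X_2\}$ stated in the edit above are easy to sanity-check numerically. The following sketch is mine, not part of either answer; the choice $\theta = 2$, the grid of $t$ values, and the sample size are arbitrary. It estimates $P(T \le t)$ for $X_1, X_2 \sim \mathrm{Uniform}(0,\theta)$ by simulation and compares it with the stated CDF $(t/\theta)^2$:

```python
import random

def empirical_cdf_of_max(theta, t, n_samples, seed=0):
    """Estimate P(max(X1, X2) <= t) for X1, X2 ~ Uniform(0, theta)."""
    rng = random.Random(seed)
    hits = sum(
        max(rng.uniform(0, theta), rng.uniform(0, theta)) <= t
        for _ in range(n_samples)
    )
    return hits / n_samples

theta = 2.0
for t in (0.5, 1.0, 1.5):
    est = empirical_cdf_of_max(theta, t, 200_000)
    exact = (t / theta) ** 2  # claimed: P(T <= t) = (t/theta)^2 on [0, theta]
    assert abs(est - exact) < 0.01, (t, est, exact)
```

The agreement within Monte Carlo error supports the density $f(t) = 2t/\theta^2$ on $[0,\theta]$ obtained by differentiating the CDF.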
Question about calculating $\lim\limits_{n \to\infty} \dfrac{6n^5+(\sin(8n))^2}{n^4+6} $
2,088,104
4
https://math.stackexchange.com/questions/2088104/question-about-calculating-lim-limits-n-to-infty-dfrac6n5-sin8n2
<p>I have the following limit:</p> <p>$$\lim\limits_{n \to\infty} \frac{6n^5+(\sin(8n))^2}{n^4+6} $$</p> <p>My first question is: can I solve this by dividing by the highest power of $n$? The reason I was uncertain about doing this was the $(\sin(8n))^2$ term; however, I figured that, as it is always bounded by $0$ and $1$, dividing by $n$ to the 5th power would cause it to tend to $0$. Is this wrong?</p> <p>My second question is: if I can solve this by dividing by the highest power of $n$, I get $\frac{6}{0}$, so how do I know whether this tends to positive or negative infinity? The question did not specify whether $n$ was originally tending to positive or negative infinity.</p> <p>Thank you!</p>
[ { "answer_id": 2088109, "score": 1, "text": "<p>For your first question, we can rewrite as follows:</p>\n\n<p>$$\\frac{6n^5+\\sin^2(8n)}{n^4+6}=\\frac{6n^5}{n^4+6}+\\frac{\\sin^2(8n)}{n^4+6}$$</p>\n\n<p>The first fraction is the real problem, but the second fraction can easily be handled with the squeeze theorem for the reason you gave: $\\sin(x)$ shouldn't be a problem here.</p>\n\n<p>Second question:</p>\n\n<p>Upon dividing by the highest power, you get $\\frac60$, and you want to know if the result is positive or negative infinity. This may be done by noticing that</p>\n\n<p>$$n^4+6\\ge6\\\\6n^5&gt;0;\\quad n&gt;0\\\\\\sin^2(8n)\\ge0$$</p>\n\n<p>Anything squared is nonnegative, and I don't think I need to give much explanation as to why $6n^5$ will be positive for $n&gt;0$.</p>\n\n<p>Putting all that together, you should be able to conclude it diverges to $+\\infty$.</p>\n" }, { "answer_id": 2088111, "score": 1, "text": "<p>1) You are not wrong here. $\\sin(ax)$ is always bounded between $-1$ and $1$, and when you square any value in $[-1,1]$ you get a value in $[0,1]$. We thus don't have to worry about the sine term exceeding this bound.</p>\n\n<p>2) For sequences, $n\\to\\infty$ conventionally means $n \\to +\\infty$, and that is the case to consider here. (Note that the numerator contains the odd power $n^5$, so the sign of $n$ would matter if $n \\to -\\infty$ were intended.)</p>\n\n<p>As far as actually investigating your limit goes, I would first divide by $n^4$. We get that\n$$\\lim\\limits_{n \\to\\infty} \\frac{6n^5+(\\sin(8n))^2}{n^4+6}$$\n$$=\\lim\\limits_{n \\to\\infty} \\frac{6n+\\frac{(\\sin(8n))^2}{n^4}}{1+\\frac{6}{n^4}}$$\nAs you noted, the sine is bounded, and so dividing by an increasing power of $n$ will make that term tend to zero. Likewise, $\\lim_{n\\to\\infty} \\frac{6}{n^4} = 0$. Applying these, we get that the limit is just\n$$\\lim\\limits_{n \\to\\infty} 6n$$<br>\nwhich clearly diverges to $+\\infty$. I am not sure how you got $\\frac{6}{0}$ and not $\\frac{6}{1}$, though I would be happy to investigate your work if you post it!</p>\n" }, { "answer_id": 2088112, "score": 0, "text": "<p>$\\lim\\limits_{n \\to\\infty} \\frac{6n^5+\\sin^28n}{n^4+6}=\\lim\\limits_{n \\to\\infty} \\frac{6n+\\frac{\\sin^28n}{n^4}}{1+\\frac{6}{n^4}} =\\infty $</p>\n" }, { "answer_id": 2088117, "score": 0, "text": "<p>It is much simpler to use <em>equivalents</em>:</p>\n\n<p>As $\\sin^28n$ is bounded, $6n^5+\\sin^28n\\sim_\\infty 6n^5$; $n^4+6\\sim_\\infty n^4$, so\n$$\\frac{6n^5+\\sin^28n}{n^4+6}\\sim_\\infty\\frac{6n^5}{n^4}=6n,$$\nand the limit is $+\\infty$.</p>\n" } ]
Question: Question about calculating $\lim\limits_{n \to\infty} \dfrac{6n^5+(\sin(8n))^2}{n^4+6} $ Answer 1: <p>For your first question, we can rewrite as follows:</p> <p>$$\frac{6n^5+\sin^2(8n)}{n^4+6}=\frac{6n^5}{n^4+6}+\frac{\sin^2(8n)}{n^4+6}$$</p> <p>The first fraction is the real problem, but the second fraction can easily be handled with the squeeze theorem for the reason you gave: $\sin(x)$ shouldn't be a problem here.</p> <p>Second question:</p> <p>Upon dividing by the highest power, you get $\frac60$, and you want to know if the result is positive or negative infinity. This may be done by noticing that</p> <p>$$n^4+6\ge6\\6n^5&gt;0;\quad n&gt;0\\\sin^2(8n)\ge0$$</p> <p>Anything squared is nonnegative, and I don't think I need to give much explanation as to why $6n^5$ will be positive for $n&gt;0$.</p> <p>Putting all that together, you should be able to conclude it diverges to $+\infty$.</p> Answer 2: <p>1) You are not wrong here. $\sin(ax)$ is always bounded between $-1$ and $1$, and when you square any value in $[-1,1]$ you get a value in $[0,1]$. We thus don't have to worry about the sine term exceeding this bound.</p> <p>2) For sequences, $n\to\infty$ conventionally means $n \to +\infty$, and that is the case to consider here. (Note that the numerator contains the odd power $n^5$, so the sign of $n$ would matter if $n \to -\infty$ were intended.)</p> <p>As far as actually investigating your limit goes, I would first divide by $n^4$. We get that $$\lim\limits_{n \to\infty} \frac{6n^5+(\sin(8n))^2}{n^4+6}$$ $$=\lim\limits_{n \to\infty} \frac{6n+\frac{(\sin(8n))^2}{n^4}}{1+\frac{6}{n^4}}$$ As you noted, the sine is bounded, and so dividing by an increasing power of $n$ will make that term tend to zero. Likewise, $\lim_{n\to\infty} \frac{6}{n^4} = 0$. Applying these, we get that the limit is just $$\lim\limits_{n \to\infty} 6n$$<br> which clearly diverges to $+\infty$. I am not sure how you got $\frac{6}{0}$ and not $\frac{6}{1}$, though I would be happy to investigate your work if you post it!</p> Answer 3: <p>$\lim\limits_{n \to\infty} \frac{6n^5+\sin^28n}{n^4+6}=\lim\limits_{n \to\infty} \frac{6n+\frac{\sin^28n}{n^4}}{1+\frac{6}{n^4}} =\infty $</p> Answer 4: <p>It is much simpler to use <em>equivalents</em>:</p> <p>As $\sin^28n$ is bounded, $6n^5+\sin^28n\sim_\infty 6n^5$; $n^4+6\sim_\infty n^4$, so $$\frac{6n^5+\sin^28n}{n^4+6}\sim_\infty\frac{6n^5}{n^4}=6n,$$ and the limit is $+\infty$.</p>
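As a numerical sanity check of the equivalents argument in the answers (this sketch is mine, not from any answer), the ratio of the sequence to $6n$ approaches $1$, so the sequence itself diverges to $+\infty$:

```python
import math

def f(n):
    """The sequence (6n^5 + sin(8n)^2) / (n^4 + 6)."""
    return (6 * n**5 + math.sin(8 * n) ** 2) / (n**4 + 6)

# f(n) / (6n) should approach 1 as n grows, matching f(n) ~ 6n;
# the deviation is O(1/n^4) since the sine term is bounded.
for n in (10, 100, 1000, 10_000):
    ratio = f(n) / (6 * n)
    assert abs(ratio - 1) < 10 / n**2
```

This does not prove the limit, of course; it only illustrates that the bounded $\sin^2(8n)$ term and the constant $6$ in the denominator become negligible.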
Explicit Bijection between Triangles and Partitions
2,088,165
4
https://math.stackexchange.com/questions/2088165/explicit-bijection-between-triangles-and-partitions
<p>Let $t(n)$ denote the number of nondegenerate triangles with integer side lengths and perimeter $n$ (where two triangles are considered the same if they have the same side lengths).</p> <p>With some case work, it can be shown that $$\sum_{n\ge 0} t(n)x^n = \frac{x^3}{(1-x^2)(1-x^3)(1-x^4)}.$$</p> <p>However, is there a more direct proof of the above identity?</p> <p>That is, is there an example of an explicit bijection between the set of the nondegenerate triangles with integer side lengths of perimeter $n$ and the set of partitions of $(n-3)$ with parts taken from $\{2,3,4\}$ ?</p>
[ { "answer_id": 2088388, "score": 2, "text": "<p>Suppose that $n-3=2a+3b+4c$, where $a,b$, and $c$ are non-negative integers. We may think of this as a partition of $n-3$ into parts taken from $\\{2,3,4\\}$. If $p_1=p_2=a+b+c$, $p_3=b+c$, and $p_4=c$, the conjugate partition is $n-3=p_1+p_2+p_3+p_4$, with the proviso that $p_3$ and $p_4$ can be $0$.</p>\n\n<p>Let </p>\n\n<p>$$\\begin{align*}\nr&amp;=p_1+p_4+1=a+b+2c+1\\;,\\\\\ns&amp;=p_2+1=a+b+c+1\\;,\\text{ and}\\\\\nt&amp;=p_3+1=b+c+1\\;;\n\\end{align*}$$</p>\n\n<p>clearly $r+s+t=n$, and $r\\ge s\\ge t\\ge 1$. Moreover, $s+t-r=b+1&gt;0$, so there is a non-degenerate triangle with sides $r,s$, and $t$ and perimeter $n$.</p>\n\n<p>Now suppose that $r,s$, and $t$ are the sides of a non-degenerate triangle with perimeter $n$, where $r\\ge s\\ge t$. Let</p>\n\n<p>$$\\begin{align*}\np_1&amp;=s-1\\;,\\\\\np_2&amp;=s-1\\;,\\\\\np_3&amp;=t-1\\;,\\text{ and}\\\\\np_4&amp;=r-s\\;.\n\\end{align*}$$</p>\n\n<p>Clearly $p_1+p_2+p_3+p_4=r+s+t-3=n-3$, and $p_1=p_2\\ge p_3$. Moreover, if $p_3&lt;p_4$, then $t+s&lt;r+1$ and hence $t+s\\le r$, contrary to the assumption that the triangle is non-degenerate. Thus, $p_1\\ge p_2\\ge p_3\\ge p_4$. Finally, let</p>\n\n<p>$$\\begin{align*}\nc&amp;=p_4\\;,\\\\\nb&amp;=p_3-p_4\\;,\\text{ and}\\\\\na&amp;=p_2-p_3\\;;\n\\end{align*}$$</p>\n\n<p>then</p>\n\n<p>$$2a+3b+4c=2p_2+p_3+p_4=p_1+p_2+p_3+p_4=n-3\\;.$$</p>\n\n<p>This establishes the desired bijection.</p>\n\n<p>By the way, this is <a href=\"https://oeis.org/A005044\" rel=\"nofollow noreferrer\">OEIS A005044</a>, with several interesting references; I especially liked <a href=\"http://www.themathcircle.org/integertriangles.pdf\" rel=\"nofollow noreferrer\">this PDF</a> and <a href=\"http://www2.edc.org/makingmath/handbook/Teacher/GettingInformation/TrianglesAndBarrels.pdf\" rel=\"nofollow noreferrer\">this one</a>.</p>\n" }, { "answer_id": 2091059, "score": 0, "text": "<p>After some thinking, I've found a more visual correspondence between triangles with integer side lengths and the relevant type of partitions. This should be equivalent to Brian's answer, but I haven't checked.</p>\n\n<p>The idea is as follows. Suppose we have a triangle with side lengths $a\\ge b\\ge c$ so that the triangle has perimeter $a+ b + c = n$. Draw three rows of dots such that the first row has $a$ dots, the second row has $b$ dots, and the third row has $c$ dots.</p>\n\n<p>Since $a,b,c &gt; 0,$ the first column in the resulting image will have three dots. Cut these dots off from the image.</p>\n\n<p>If $a &gt; b,$ the final few columns of the picture will each only contain one dot. Move these singleton dots to columns with three dots, leading to columns with four dots. This is always possible since \n$$a - b \\le c-1$$\nby the triangle inequality.</p>\n\n<p>Now read off the number of dots in each column to get a partition of $(n-3)$ with parts taken from the set $\\{2,3,4\\}.$</p>\n\n<p>For example, below is an image showing how to start off with a triangle of perimeter $16$ with side lengths $7, 5,$ and $4,$ and then follow the above steps to get the partition $4 + 4 + 3 + 2$ of $13.$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/j1oJrm.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/j1oJrm.png\" alt=\"enter image description here\"></a></p>\n\n<p>To show that this procedure actually yields a bijection takes a bit more work, but since these steps are easy to reverse it is not too hard to show that it works out and that the triangle inequality ends up holding.</p>\n" } ]
Question: Explicit Bijection between Triangles and Partitions Answer 1: <p>Suppose that $n-3=2a+3b+4c$, where $a,b$, and $c$ are non-negative integers. We may think of this as a partition of $n-3$ into parts taken from $\{2,3,4\}$. If $p_1=p_2=a+b+c$, $p_3=b+c$, and $p_4=c$, the conjugate partition is $n-3=p_1+p_2+p_3+p_4$, with the proviso that $p_3$ and $p_4$ can be $0$.</p> <p>Let </p> <p>$$\begin{align*} r&amp;=p_1+p_4+1=a+b+2c+1\;,\\ s&amp;=p_2+1=a+b+c+1\;,\text{ and}\\ t&amp;=p_3+1=b+c+1\;; \end{align*}$$</p> <p>clearly $r+s+t=n$, and $r\ge s\ge t\ge 1$. Moreover, $s+t-r=b+1&gt;0$, so there is a non-degenerate triangle with sides $r,s$, and $t$ and perimeter $n$.</p> <p>Now suppose that $r,s$, and $t$ are the sides of a non-degenerate triangle with perimeter $n$, where $r\ge s\ge t$. Let</p> <p>$$\begin{align*} p_1&amp;=s-1\;,\\ p_2&amp;=s-1\;,\\ p_3&amp;=t-1\;,\text{ and}\\ p_4&amp;=r-s\;. \end{align*}$$</p> <p>Clearly $p_1+p_2+p_3+p_4=r+s+t-3=n-3$, and $p_1=p_2\ge p_3$. Moreover, if $p_3&lt;p_4$, then $t+s&lt;r+1$ and hence $t+s\le r$, contrary to the assumption that the triangle is non-degenerate. Thus, $p_1\ge p_2\ge p_3\ge p_4$. Finally, let</p> <p>$$\begin{align*} c&amp;=p_4\;,\\ b&amp;=p_3-p_4\;,\text{ and}\\ a&amp;=p_2-p_3\;; \end{align*}$$</p> <p>then</p> <p>$$2a+3b+4c=2p_2+p_3+p_4=p_1+p_2+p_3+p_4=n-3\;.$$</p> <p>This establishes the desired bijection.</p> <p>By the way, this is <a href="https://oeis.org/A005044" rel="nofollow noreferrer">OEIS A005044</a>, with several interesting references; I especially liked <a href="http://www.themathcircle.org/integertriangles.pdf" rel="nofollow noreferrer">this PDF</a> and <a href="http://www2.edc.org/makingmath/handbook/Teacher/GettingInformation/TrianglesAndBarrels.pdf" rel="nofollow noreferrer">this one</a>.</p> Answer 2: <p>After some thinking, I've found a more visual correspondence between triangles with integer side lengths and the relevant type of partitions. This should be equivalent to Brian's answer, but I haven't checked.</p> <p>The idea is as follows. Suppose we have a triangle with side lengths $a\ge b\ge c$ so that the triangle has perimeter $a+ b + c = n$. Draw three rows of dots such that the first row has $a$ dots, the second row has $b$ dots, and the third row has $c$ dots.</p> <p>Since $a,b,c &gt; 0,$ the first column in the resulting image will have three dots. Cut these dots off from the image.</p> <p>If $a &gt; b,$ the final few columns of the picture will each only contain one dot. Move these singleton dots to columns with three dots, leading to columns with four dots. This is always possible since $$a - b \le c-1$$ by the triangle inequality.</p> <p>Now read off the number of dots in each column to get a partition of $(n-3)$ with parts taken from the set $\{2,3,4\}.$</p> <p>For example, below is an image showing how to start off with a triangle of perimeter $16$ with side lengths $7, 5,$ and $4,$ and then follow the above steps to get the partition $4 + 4 + 3 + 2$ of $13.$ </p> <p><a href="https://i.stack.imgur.com/j1oJrm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j1oJrm.png" alt="enter image description here"></a></p> <p>To show that this procedure actually yields a bijection takes a bit more work, but since these steps are easy to reverse it is not too hard to show that it works out and that the triangle inequality ends up holding.</p>
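The bijection from the first answer is easy to machine-check. Here is a sketch (the function names are mine, not from either answer) that verifies the forward map lands on a valid partition, the constraint $2a+3b+4c=n-3$ holds, and the round trip recovers the triangle, for all triangles of small perimeter:

```python
def triangles(n):
    """Nondegenerate integer triangles of perimeter n, as triples r >= s >= t."""
    return {
        (r, s, t)
        for t in range(1, n + 1)
        for s in range(t, n + 1)
        for r in range(s, n + 1)
        if r + s + t == n and s + t > r
    }

def triangle_to_partition(r, s, t):
    """Forward map: with p2 = s-1, p3 = t-1, p4 = r-s, return (a, b, c)."""
    p2, p3, p4 = s - 1, t - 1, r - s
    return (p2 - p3, p3 - p4, p4)

def partition_to_triangle(a, b, c):
    """Inverse map: (r, s, t) = (a+b+2c+1, a+b+c+1, b+c+1)."""
    return (a + b + 2 * c + 1, a + b + c + 1, b + c + 1)

for n in range(3, 40):
    for tri in triangles(n):
        a, b, c = triangle_to_partition(*tri)
        assert a >= 0 and b >= 0 and c >= 0      # valid multiplicities
        assert 2 * a + 3 * b + 4 * c == n - 3    # partition of n - 3
        assert partition_to_triangle(a, b, c) == tri  # round trip
```

For instance, the perimeter-16 example from the visual answer, sides $(7,5,4)$, maps to $(a,b,c)=(1,1,2)$, i.e. $13 = 2 + 3 + 4 + 4$, matching the partition $4+4+3+2$ read off the dot diagram.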
How to solve $ \tan (m \theta) + \cos(n \theta) =0$
2,088,200
4
https://math.stackexchange.com/questions/2088200/how-to-solve-tan-m-theta-cosn-theta-0
<p>$$ \tan (2m \theta) + \cos(2n \theta) =0 $$</p> <p>I am trying to solve this trigonometric equation for $\theta$ to get a general solution.</p> <p>I tried using the substitutions $t_1= \tan(n\theta)$ and $t_2= \tan(m \theta)$, but I was not able to get a good answer.</p> <p>Any ideas?</p>
[ { "answer_id": 2088392, "score": 2, "text": "<p>As marty cohen answered, for the most general case, only numerical methods will be able to solve the equation $$\\tan (2m \\theta) + \\cos(2n \\theta) =0$$ which, as he wrote, would be better written as $$\\tan(t)+\\cos(rt)=0$$ The problem is that the function $$f(t)=\\tan(t)+\\cos(rt)$$ presents an infinite number of discontinuities at $t=(2k+1)\\frac \\pi 2$, and this is never very good.</p>\n\n<p>Since $\\cos(t)=0$ cannot give a solution, I suggest that you look instead for the zeros of $$g(t)=\\sin(t)+\\cos(t)\\cos(rt)$$ which is continuous everywhere.</p>\n\n<p>Starting from a \"reasonable\" guess $t_0$, Newton's method will update it according to $$t_{n+1}=t_n+\\frac{\\cos (t_n) \\cos (r t_n)+\\sin (t_n)}{\\sin (t_n) \\cos (r t_n)+\\cos (t_n) (r \\sin (r t_n)-1)}$$</p>\n" }, { "answer_id": 2088313, "score": 1, "text": "<p>I think that it is unlikely\nthat there is a general solution.</p>\n\n<p>By letting\n$t \n= 2m\\theta$\nand\n$r = n/m$,\n$\\tan (2m \\theta) + \\cos(2n \\theta) =0\n$ becomes\n$\\tan (t)\n= -\\cos(rt)\n$.</p>\n\n<p>Looking at the graphs\nof $\\tan(t)$\nand\n$-\\cos(rt)$,\nwe see that there is at least\none solution in each range\n$k\\pi \\le t &lt; (k+1)\\pi$.\nTherefore there are a\ncountable infinity of solutions,\nindependent of the values\nof $n$ and $m$.</p>\n\n<p>Also,\nif $r &lt; 1$\n(or $n &lt; m$),\nthere is exactly\none root in each interval.</p>\n\n<p>So it seems to me that\nonly numerical calculations\ncan find the roots in general.</p>\n\n<p>However,\nfor small values of $n$ or $m$,\nthere may be explicit solutions.</p>\n" } ]
Question: How to solve $ \tan (m \theta) + \cos(n \theta) =0$ Answer 1: <p>As marty cohen answered, for the most general case, only numerical methods will be able to solve the equation $$\tan (2m \theta) + \cos(2n \theta) =0$$ which, as he wrote, would be better written as $$\tan(t)+\cos(rt)=0$$ The problem is that the function $$f(t)=\tan(t)+\cos(rt)$$ presents an infinite number of discontinuities at $t=(2k+1)\frac \pi 2$ and thius is never very good.</p> <p>Assuming $\cos(t)\neq 0$ as a possible solution, I suggest that you look instead for the zero's of $$g(t)=\sin(t)+\cos(t)\cos(rt)$$ which is continuous everywhere.</p> <p>Starting froma "reasonable" guess $t_0$, Newton method will updtate it according to $$t_{n+1}=t_n+\frac{\cos (t_n) \cos (r t_n)+\sin (t_n)}{\sin (t_n) \cos (r t_n)+\cos (t_n) (r \sin (r t_n)-1)}$$</p> Answer 2: <p>I think that it is unlikely that there is a general solution.</p> <p>By letting $t = 2m\theta$ and $r = n/m$, $\tan (2m \theta) + \cos(2n \theta) =0 $ becomes $\tan (t) = \cos(rt) $.</p> <p>Looking at the graphs of $\tan(t)$ and $\cos(rt)$, we see that there is at least one solution in each range $k\pi \le t &lt; (k+1)\pi$. 
Therefore there are a countable infinity of solutions independent of the values of $n$ and $m$.</p> <p>Also, if $r &lt; 1$ (or $n &lt; m$), there is exactly one root in each interval.</p> <p>So it seems to me that only numerical calculations can find the roots in general.</p> <p>However, for small values of $n$ or $m$, there may be explicit solutions.</p>
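The Newton update quoted in Answer 1 is easy to sanity-check numerically. Below is a minimal Python sketch; the function name `newton_root`, the value $r=0.5$, and the starting guess $t_0=2.0$ are illustrative choices of mine, not taken from the answers.

```python
import math

def newton_root(r, t0, tol=1e-12, max_iter=50):
    """Newton iteration for g(t) = sin(t) + cos(t)*cos(r*t),
    the continuous reformulation of tan(t) + cos(r*t) = 0."""
    t = t0
    for _ in range(max_iter):
        g = math.sin(t) + math.cos(t) * math.cos(r * t)
        # g'(t) = cos(t) - sin(t)*cos(r*t) - r*cos(t)*sin(r*t)
        dg = math.cos(t) - math.sin(t) * math.cos(r * t) - r * math.cos(t) * math.sin(r * t)
        step = g / dg
        t -= step
        if abs(step) < tol:
            break
    return t

r = 0.5
t = newton_root(r, t0=2.0)
# the root of g also solves the original equation (cos(t) != 0 here)
assert abs(math.sin(t) + math.cos(t) * math.cos(r * t)) < 1e-10
assert abs(math.tan(t) + math.cos(r * t)) < 1e-8
```

For $r=0.5$ this converges to $t=\pi$, an exact root since both $\sin(\pi)$ and $\cos(\pi/2)$ vanish.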
Find largest dividing integer within range
2,088,229
4
https://math.stackexchange.com/questions/2088229/find-largest-dividing-integer-within-range
<p>Is there a neat way to find the largest integer that divides another integer fully, within a range? As an example, I would like to find the largest integer smaller than 1131 that divides 3500 completely. </p> <p>So far I have just tried by breaking up 3500 into its prime components and guessing, coming to 875, but is there a more structured way?</p> <p>EDIT: I guess the problem is somewhat equivalent to getting all dividing integers, and then just picking the largest within my range?</p>
[ { "answer_id": 2088245, "score": 2, "text": "<p>You could look for the smallest integer greater than $\\dfrac{3500}{1131}\\approx 3.09$ that divides $3500$ exactly </p>\n\n<p>This is obviously $4$ (there is no smaller integer greater then $3.09$ and $4$ does divide $3500$ as it divides $100$) so the answer to your original question is $\\dfrac{3500}{4}=875$</p>\n" }, { "answer_id": 2088267, "score": 1, "text": "<p>There is a <a href=\"https://cstheory.stackexchange.com/questions/4769/an-np-complete-variant-of-factoring/4785\">randomized reduction from SUBSET-SUM</a> to this problem, and it is NP-complete assuming a weak version of Cramér's conjecture. So there is no neat general solution, but of course there are many special cases that can be determined quickly (for example if the suspected multiple is prime we can test that in polynomial time and know that there is no divisor in the range, or if the range is small enough that the divisibility by each member can be tested individually).</p>\n" }, { "answer_id": 2088245, "score": 2, "text": "<p>You could look for the smallest integer greater than $\\dfrac{3500}{1131}\\approx 3.09$ that divides $3500$ exactly </p>\n\n<p>This is obviously $4$ (there is no smaller integer greater then $3.09$ and $4$ does divide $3500$ as it divides $100$) so the answer to your original question is $\\dfrac{3500}{4}=875$</p>\n" }, { "answer_id": 2088267, "score": 1, "text": "<p>There is a <a href=\"https://cstheory.stackexchange.com/questions/4769/an-np-complete-variant-of-factoring/4785\">randomized reduction from SUBSET-SUM</a> to this problem, and it is NP-complete assuming a weak version of Cramér's conjecture. 
So there is no neat general solution, but of course there are many special cases that can be determined quickly (for example if the suspected multiple is prime we can test that in polynomial time and know that there is no divisor in the range, or if the range is small enough that the divisibility by each member can be tested individually).</p>\n" } ]
Question: Find largest dividing integer within range Answer 1: <p>You could look for the smallest integer greater than $\dfrac{3500}{1131}\approx 3.09$ that divides $3500$ exactly </p> <p>This is obviously $4$ (there is no smaller integer greater than $3.09$ and $4$ does divide $3500$ as it divides $100$) so the answer to your original question is $\dfrac{3500}{4}=875$</p> Answer 2: <p>There is a <a href="https://cstheory.stackexchange.com/questions/4769/an-np-complete-variant-of-factoring/4785">randomized reduction from SUBSET-SUM</a> to this problem, and it is NP-complete assuming a weak version of Cramér's conjecture. So there is no neat general solution, but of course there are many special cases that can be determined quickly (for example if the suspected multiple is prime we can test that in polynomial time and know that there is no divisor in the range, or if the range is small enough that the divisibility by each member can be tested individually).</p>
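Answer 1's recipe — take the smallest cofactor $q$ greater than $3500/1131$ and return $3500/q$ — is straightforward to code. A small Python sketch; the helper name `largest_divisor_below` is mine:

```python
def largest_divisor_below(n, bound):
    """Largest d < bound with d | n: the cofactor q = n/d must be the
    smallest integer strictly greater than n/bound that divides n."""
    q = n // bound + 1  # smallest candidate cofactor > n/bound
    while n % q:
        q += 1
    return n // q

assert largest_divisor_below(3500, 1131) == 875  # cofactor q = 4
assert largest_divisor_below(100, 50) == 25      # cofactor q = 4
```

Note the worst case still scans cofactors one by one, consistent with Answer 2's point that no efficient general method is expected.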
Linearly independent and dependent columns
2,088,477
4
https://math.stackexchange.com/questions/2088477/linearly-independent-and-dependent-columns
<p>My book presents the following matrix:</p> <p>\begin{bmatrix}1&amp;0&amp;3&amp;0&amp;-4\\0&amp;1&amp;-1&amp;0&amp;2\\0&amp;0&amp;0&amp;1&amp;-2\\0&amp;0&amp;0&amp;0&amp;0\end{bmatrix}</p> <p>The book denotes the columns as $a_1, \ldots, a_5$ and asks us to say whether the following columns are linearly independent:</p> <p>$$\{a_1, a_2, a_4\}, \{a_1, a_2, a_3\}, \{a_1, a_3, a_5\}$$</p> <p>The book's answer is as follows:</p> <blockquote> <p>Because $a_3 = 3a_1 - a_2, \{a_1, a_2, a_3\}$ is linearly dependent. So, $\{a_1, a_2, a_4\}$ and $\{a_1, a_3, a_5\}$ are linearly independent.</p> </blockquote> <p>I already understand how to get the correct answer for this problem; you just look at the matrix and see whether one of them can be represented as a linear combination of the others or not. I'm just confused on the book's way of doing it. I'm confused on where they say "So, $\{a_1, a_2, a_4\}$ and $\{a_1, a_3, a_5\}$ are linearly independent" -- it seems like they're coming to that conclusion from seeing that $a_3 = 3a_1 - a_2$ , and I don't understand how those other sets' linear independence can be concluded from that?</p>
[ { "answer_id": 2088488, "score": 3, "text": "<p>You are right that the book should not conclude it that way unless you have additional information such as only one of those sets are not linearly independent. </p>\n\n<p>Otherwise, without additional information about $a_4$ and $a_5$ we cannot make such conclusion, for example, I can replace $a_4$ by $a_1+a_2$ and the conclusion would have been false.</p>\n" }, { "answer_id": 2088553, "score": 0, "text": "<p>Note that the matrix is in reduced row echelon form.</p>\n\n<p>Since the set $\\{a_1,a_2,a_4\\}$ consists of the pivot (or dominant) columns, it is linearly independent.</p>\n\n<p>Moreover, the nonpivot columns can be easily written as linear combination of the pivot ones, namely\n$$\na_3=3a_1-a_2,\\quad\na_5=-4a_1+2a_2-2a_4\n$$\nThis implies that $\\{a_1,a_2,a_3\\}$ is linearly dependent.</p>\n\n<p>For the last set, we can reason as follows: remove columns $2$ and $4$ to get a matrix in (nonreduced) row echelon form, where every column is dominant.</p>\n" }, { "answer_id": 2088488, "score": 3, "text": "<p>You are right that the book should not conclude it that way unless you have additional information such as only one of those sets are not linearly independent. 
</p>\n\n<p>Otherwise, without additional information about $a_4$ and $a_5$ we cannot make such conclusion, for example, I can replace $a_4$ by $a_1+a_2$ and the conclusion would have been false.</p>\n" }, { "answer_id": 2088553, "score": 0, "text": "<p>Note that the matrix is in reduced row echelon form.</p>\n\n<p>Since the set $\\{a_1,a_2,a_4\\}$ consists of the pivot (or dominant) columns, it is linearly independent.</p>\n\n<p>Moreover, the nonpivot columns can be easily written as linear combination of the pivot ones, namely\n$$\na_3=3a_1-a_2,\\quad\na_5=-4a_1+2a_2-2a_4\n$$\nThis implies that $\\{a_1,a_2,a_3\\}$ is linearly dependent.</p>\n\n<p>For the last set, we can reason as follows: remove columns $2$ and $4$ to get a matrix in (nonreduced) row echelon form, where every column is dominant.</p>\n" } ]
Question: Linearly independent and dependent columns Answer 1: <p>You are right that the book should not conclude it that way unless you have additional information, such as that only one of those sets is not linearly independent.</p> <p>Otherwise, without additional information about $a_4$ and $a_5$ we cannot draw such a conclusion; for example, I can replace $a_4$ by $a_1+a_2$ and the conclusion would have been false.</p> Answer 2: <p>Note that the matrix is in reduced row echelon form.</p> <p>Since the set $\{a_1,a_2,a_4\}$ consists of the pivot (or dominant) columns, it is linearly independent.</p> <p>Moreover, the nonpivot columns can be easily written as linear combinations of the pivot ones, namely $$ a_3=3a_1-a_2,\quad a_5=-4a_1+2a_2-2a_4 $$ This implies that $\{a_1,a_2,a_3\}$ is linearly dependent.</p> <p>For the last set, we can reason as follows: remove columns $2$ and $4$ to get a matrix in (nonreduced) row echelon form, where every column is dominant.</p>
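The rank claims in Answer 2 can be verified with exact arithmetic on the book's columns. A small pure-Python sketch; the helper name `rank` and the Gaussian-elimination implementation are mine:

```python
from fractions import Fraction

def rank(cols):
    """Rank of a list of column vectors, via Gaussian elimination
    over the rationals (exact, no floating-point issues)."""
    rows = [[Fraction(c[i]) for c in cols] for i in range(len(cols[0]))]
    r = 0
    for col in range(len(cols)):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# columns of the book's RREF matrix
a1 = (1, 0, 0, 0); a2 = (0, 1, 0, 0); a3 = (3, -1, 0, 0)
a4 = (0, 0, 1, 0); a5 = (-4, 2, -2, 0)

assert a3 == tuple(3 * x - y for x, y in zip(a1, a2))  # a3 = 3a1 - a2
assert rank([a1, a2, a4]) == 3  # pivot columns: independent
assert rank([a1, a2, a3]) == 2  # dependent
assert rank([a1, a3, a5]) == 3  # independent
```

A set of three columns is linearly independent exactly when its rank is 3, which is what the assertions check.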
Sum of binomial coefficients involving $n,p,q,r$
2,088,564
4
https://math.stackexchange.com/questions/2088564/sum-of-binomial-coefficients-involving-n-p-q-r
<blockquote> <p>Sum of binomial product $\displaystyle \sum^{n}_{r=0}\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}$</p> </blockquote> <p>Simplifying $\displaystyle \frac{p!}{r!\cdot (p-r)!} \cdot \frac{q!}{r!\cdot (q-r)!}\cdot \frac{(n+r)!}{(p+q)! \cdot (n+r-p-q)!}$.</p> <p>Could someone help me with this? Thanks</p>
[ { "answer_id": 2089319, "score": 7, "text": "<p>It is convenient to use the <em>coefficient of</em> operator <span class=\"math-container\">$[t^r]$</span> to denote the coefficient of <span class=\"math-container\">$t^r$</span> in a series. This way we can write e.g.\n<span class=\"math-container\">\\begin{align*}\n\\binom{p}{r}=[t^r](1+t)^p\n\\end{align*}</span></p>\n<p>The following is valid\n<span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sum_{r=0}^n\\binom{p}{r}\\binom{q}{r}\\binom{n+r}{p+q}=\\binom{n}{p}\\binom{n}{q}}\n\\end{align*}</span></p>\n<blockquote>\n<p>We obtain\n<span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sum_{r=0}^n}&amp;\\color{blue}{\\binom{p}{r}\\binom{q}{r}\\binom{n+r}{p+q}}\\\\\n&amp;=\\sum_{r=0}^n\\binom{p}{r}\\binom{q}{q-r}\\binom{n+r}{p+q}\\\\\n&amp;=\\sum_{r=0}^\\infty[t^r](1+t)^p[v^{q-r}](1+v)^q[w^{p+q}](1+w)^{n+r}\\tag{1}\\\\\n&amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n\\sum_{r=0}^\\infty(v(1+w))^r[t^r](1+t)^p\\tag{2}\\\\\n&amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n(1+v(1+w))^p\\tag{3}\\\\\n&amp;=[v^q](1+v)^{p+q}[w^{p+q}](1+w)^n\\left(1+\\frac{vw}{1+v}\\right)^p\\tag{4}\\\\\n&amp;=[v^q](1+v)^{p+q}[w^{p+q}]\\sum_{k=0}^{p+q}\\binom{n}{k}w^k\\sum_{j=0}^p\\binom{p}{j}\\left(\\frac{v}{1+v}\\right)^jw^j\\tag{5}\\\\\n&amp;=[v^q](1+v)^{p+q}\\sum_{k=0}^{p+q}\\binom{n}{k}[w^{p+q-k}]\\sum_{j=0}^p\\binom{p}{j}\\left(\\frac{v}{1+v}\\right)^jw^j\\\\\n&amp;=[v^q](1+v)^{p+q}\\sum_{k=0}^{p+q}\\binom{n}{k}\\binom{p}{p+q-k}\\frac{v^{p+q-k}}{(1+v)^{p+q-k}}\\tag{6}\\\\\n&amp;=\\sum_{k=p}^{p+q}\\binom{n}{k}\\binom{p}{k-q}[v^{k-p}](1+v)^k\\tag{7}\\\\\n&amp;=\\sum_{k=p}^{p+q}\\binom{n}{k}\\binom{p}{k-q}\\binom{k}{p}\\\\\n&amp;=\\binom{n}{p}\\sum_{k=p}^{p+q}\\binom{p}{k-q}\\binom{n-p}{k-p}\\tag{8}\\\\\n&amp;=\\binom{n}{p}\\sum_{k=0}^{q}\\binom{p}{k+p-q}\\binom{n-p}{k}\\tag{9}\\\\\n&amp;=\\binom{n}{p}\\sum_{k=0}^{q}\\binom{p}{q-k}\\binom{n-p}{k}\\tag{10}\\\\\n&amp;\\color{blue}{=\\binom{n}{p}\\binom{n}{q}}\n\\end{align*}</span>\nand the claim 
follows.</p>\n</blockquote>\n<p><em>Comment:</em></p>\n<ul>\n<li><p>In (1) we apply the <em>coefficient of</em> operator for each binomial coefficient.</p>\n</li>\n<li><p>In (2) we use the linearity of the <em>coefficient of</em> operator and apply the rule <span class=\"math-container\">$[t^{p}]t^qA(t)=[t^{p-q}]A(t)$</span>.</p>\n</li>\n<li><p>In (3) we apply the substitution rule of the <em>coefficient of</em> operator with <span class=\"math-container\">$t:=v(1+w)$</span>\n<span class=\"math-container\">\\begin{align*}\nA(z)=\\sum_{r=0}^\\infty a_rz^r=\\sum_{r=0}^\\infty z^r[t^r]A(t)\n\\end{align*}</span></p>\n</li>\n<li><p>In (4) we factor out <span class=\"math-container\">$(1+v)^p$</span>.</p>\n</li>\n<li><p>In (5) we apply the binomial summation formula twice. Since we want to select the coefficient of <span class=\"math-container\">$w^{p+q}$</span> we can restrict the upper limit of the left sum with <span class=\"math-container\">$k=p+q$</span>.</p>\n</li>\n<li><p>In (6) we select the coefficient of <span class=\"math-container\">$w^{p+q-k}$</span>.</p>\n</li>\n<li><p>In (7) we do some simplifications and can restrict the lower limit of the sum with <span class=\"math-container\">$k=p$</span>.</p>\n</li>\n<li><p>In (8) we apply the <em>cross product</em> <span class=\"math-container\">$\\binom{n}{k}\\binom{k}{j}=\\binom{n}{j}\\binom{n-j}{k-j}$</span>.</p>\n</li>\n<li><p>In (9) we shift the summation index and start from <span class=\"math-container\">$k=0$</span>.</p>\n</li>\n<li><p>In (10) we apply the <em><a href=\"https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Chu.E2.80.93Vandermonde_identity\" rel=\"nofollow noreferrer\">Chu-Vandermonde identity</a></em>.</p>\n</li>\n</ul>\n" }, { "answer_id": 2088579, "score": 4, "text": "<p>This is Bizley’s identity (6.42 in Gould's book \"Combinatorial identities\"):\n$$\\sum^{n}_{r=0}\\binom{p}{r}\\binom{q}{r}\\binom{n+r}{p+q}=\\binom{n}{p}\\binom{n}{q}.$$\nA nice combinatorial proof can be found in the 
following paper by Székely as a special case of a more general identity (see Corollary 3):</p>\n\n<p><a href=\"http://www.sciencedirect.com/science/article/pii/0097316585900573\" rel=\"nofollow noreferrer\">Common origin of cubic binomial identities. A generalization of Surányi's proof on Le Jen Shoo's formula</a></p>\n" }, { "answer_id": 2245619, "score": 3, "text": "<p>The proposed identity can be derived (putting $r=s$) from this more general one\n$$\n\\sum\\limits_{\\left( {0\\, \\le } \\right)\\,k\\,\\left( { \\le \\,n} \\right)} {\\left( \\matrix{\n m - \\left( {r - s} \\right) \\cr \n k \\cr} \\right)\\left( \\matrix{\n n + \\left( {r - s} \\right) \\cr \n n - k \\cr} \\right)\\left( \\matrix{\n r + k \\cr \n m + n \\cr} \\right)} = \\left( \\matrix{\n r \\cr \n m \\cr} \\right)\\left( \\matrix{\n s \\cr \n n \\cr} \\right)\\quad \\quad \\left| \\matrix{\n \\,{\\rm 0} \\le {\\rm integer \\, }m,n \\hfill \\cr \n \\;{\\rm real}\\;r,s \\hfill \\cr} \\right.\n$$\nreported in <em>\"Concrete Mathematics\" - pag. 
171</em>, and which is demonstrated through the following passages:\n$$\n\\begin{gathered}\n \\sum\\limits_{\\left( {0\\, \\leqslant } \\right)\\,k\\,\\left( { \\leqslant \\,n} \\right)} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n n + \\left( {r - s} \\right) \\\\ \n n - k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r + k \\\\ \n m + n \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\begin{subarray}{l} \n \\left( {0\\, \\leqslant } \\right)\\,k\\,\\left( { \\leqslant \\,n} \\right) \\\\ \n \\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,n} \\right) \n\\end{subarray}} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n n + \\left( {r - s} \\right) \\\\ \n n - k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m + n - j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n k \\\\ \n j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\begin{subarray}{l} \n \\left( {0\\, \\leqslant } \\right)\\,k\\,\\left( { \\leqslant \\,n} \\right) \\\\ \n \\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,m + n} \\right) \n\\end{subarray}} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n k \\\\ \n j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n n + \\left( {r - s} \\right) \\\\ \n n - k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m + n - j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\begin{subarray}{l} \n \\left( {0\\, \\leqslant } \\right)\\,k\\,\\left( { \\leqslant \\,n} \\right) \\\\ \n \\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,m + n} \\right) \n\\end{subarray}} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n j \\\\ \n\\end{gathered} 
\\right)\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) - j \\\\ \n k - j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n n + \\left( {r - s} \\right) \\\\ \n n - k \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m + n - j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,m + n} \\right)} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n m + n - j \\\\ \n n - j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m + n - j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,m + n} \\right)} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n m + n - j \\\\ \n m \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m + n - j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\sum\\limits_{\\left( {0\\, \\leqslant } \\right)\\,j\\,\\left( { \\leqslant \\,m + n} \\right)} {\\left( \\begin{gathered}\n m - \\left( {r - s} \\right) \\\\ \n j \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r \\\\ \n m \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n r - m \\\\ \n n - j \\\\ \n\\end{gathered} \\right)} = \\hfill \\\\\n = \\left( \\begin{gathered}\n r \\\\ \n m \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n s \\\\ \n n \\\\ \n\\end{gathered} \\right) \\hfill \\\\ \n\\end{gathered}\n$$\nconsisting in: </p>\n\n<ul>\n<li>Vandermonde de-convolution</li>\n<li>shift of the 4th binomial</li>\n<li>\"trinomial revision\"</li>\n<li>sum on $k$ by Vandermonde convolution</li>\n<li>symmetry</li>\n<li>\"trinomial revision\"</li>\n<li>sum on $j$ by Vandermonde convolution</li>\n</ul>\n" }, { "answer_id": 4705275, "score": 1, "text": "<p>Here is a derivation based upon 
<em><a href=\"https://mathworld.wolfram.com/GeneralizedHypergeometricFunction.html\" rel=\"nofollow noreferrer\">generalised hypergeometric functions</a></em>. We obtain using rising factorials <span class=\"math-container\">$(a)_{k}:=a(a+1)\\cdots(a+k-1)$</span>\n<span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sum_{r=0}^n}&amp;\\underbrace{\\color{blue}{\\binom{p}{r}\\binom{q}{r}\\binom{n+r}{p+q}}}_{=: t_r}=\\sum_{r=0}^{n}t_r=t_0\\sum_{r=0}^n\\prod_{j=0}^{r-1}\\frac{t_{j+1}}{t_j}\\\\\n&amp;=\\binom{n}{p+q}\\sum_{r=0}^n\\prod_{j=0}^{r-1}\\binom{p}{j+1}\\binom{q}{j+1}\\binom{n+j+1}{p+q}\\\\\n&amp;\\qquad\\qquad\\qquad\\cdot\\binom{p}{j}^{-1}\\binom{q}{j}^{-1}\\binom{n+j}{p+q}^{-1}\\\\\n&amp;=\\binom{n}{p+q}\\sum_{r=0}^n\\prod_{j=0}^{r-1}\n\\frac{\\left(n+j+1\\right)\\left(p-j\\right)\\left(q-j\\right)}{\\left(j+1\\right)^2\\left(n-p-q+j+1\\right)}\\\\\n&amp;=\\binom{n}{p+q}\\sum_{r=0}^n\\frac{(n+1)_r(-p)_r(-q)_r}{(1)_r^2(n-p-q+1)_r}\\\\\n&amp;=\\binom{n}{p+q}{_3F_2}\\left(n+1,-p,-q;1,n-p-q+1;1\\right)\\tag{1}\\\\\n&amp;=\\binom{n}{p+q}\\frac{(-n)_q(1+p)_q}{(1)_q(-n+p)_q}\\tag{2}\\\\\n&amp;=\\binom{n}{p+q}\\frac{(-1)^qn!}{(n-q)!}\\,\\frac{(p+q)!}{p!}\\,\\frac{1}{q!}\\,\\frac{(-1)^q(n-p-q)!}{(n-p)!}\\\\\n&amp;=\\frac{n!}{p!(n-p)!}\\,\\frac{n!}{q!(n-q)!}\\\\\n&amp;\\,\\,\\color{blue}{=\\binom{n}{p}\\binom{n}{q}}\n\\end{align*}</span>\nand the claim follows.</p>\n<p>In (1) we write the sum as <span class=\"math-container\">$_3F_2$</span> generalized hypergeometric function. This function has the nice property that the sum of the denominator parameters is one more than the sum of the numerator parameters. It is evaluated at <span class=\"math-container\">$z=1$</span> and one of the numerator parameters is a negative integer. 
Such a series is called <em>balanced</em> and it obeys the <em><a href=\"https://mathworld.wolfram.com/SaalschuetzsTheorem.html\" rel=\"nofollow noreferrer\">Pfaff-Saalschütz identity</a></em>\n<span class=\"math-container\">\\begin{align*}\n{_3F_2}\\left(-n,a,b;c,1+a+b-c-n;1\\right)=\\frac{(c-a)_n(c-b)_n}{(c)_n(c-a-b)_n}\n\\end{align*}</span>\nwhich is used in (2).</p>\n" } ]
Question: Sum of binomial coefficients involving $n,p,q,r$ Answer 1: <p>It is convenient to use the <em>coefficient of</em> operator <span class="math-container">$[t^r]$</span> to denote the coefficient of <span class="math-container">$t^r$</span> in a series. This way we can write e.g. <span class="math-container">\begin{align*} \binom{p}{r}=[t^r](1+t)^p \end{align*}</span></p> <p>The following is valid <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}} \end{align*}</span></p> <blockquote> <p>We obtain <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n}&amp;\color{blue}{\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}}\\ &amp;=\sum_{r=0}^n\binom{p}{r}\binom{q}{q-r}\binom{n+r}{p+q}\\ &amp;=\sum_{r=0}^\infty[t^r](1+t)^p[v^{q-r}](1+v)^q[w^{p+q}](1+w)^{n+r}\tag{1}\\ &amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n\sum_{r=0}^\infty(v(1+w))^r[t^r](1+t)^p\tag{2}\\ &amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n(1+v(1+w))^p\tag{3}\\ &amp;=[v^q](1+v)^{p+q}[w^{p+q}](1+w)^n\left(1+\frac{vw}{1+v}\right)^p\tag{4}\\ &amp;=[v^q](1+v)^{p+q}[w^{p+q}]\sum_{k=0}^{p+q}\binom{n}{k}w^k\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\tag{5}\\ &amp;=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}[w^{p+q-k}]\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\\ &amp;=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}\binom{p}{p+q-k}\frac{v^{p+q-k}}{(1+v)^{p+q-k}}\tag{6}\\ &amp;=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}[v^{k-p}](1+v)^k\tag{7}\\ &amp;=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}\binom{k}{p}\\ &amp;=\binom{n}{p}\sum_{k=p}^{p+q}\binom{p}{k-q}\binom{n-p}{k-p}\tag{8}\\ &amp;=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{k+p-q}\binom{n-p}{k}\tag{9}\\ &amp;=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{q-k}\binom{n-p}{k}\tag{10}\\ &amp;\color{blue}{=\binom{n}{p}\binom{n}{q}} \end{align*}</span> and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (1) we apply the <em>coefficient of</em> operator for each 
binomial coefficient.</p> </li> <li><p>In (2) we use the linearity of the <em>coefficient of</em> operator and apply the rule <span class="math-container">$[t^{p}]t^qA(t)=[t^{p-q}]A(t)$</span>.</p> </li> <li><p>In (3) we apply the substitution rule of the <em>coefficient of</em> operator with <span class="math-container">$t:=v(1+w)$</span> <span class="math-container">\begin{align*} A(z)=\sum_{r=0}^\infty a_rz^r=\sum_{r=0}^\infty z^r[t^r]A(t) \end{align*}</span></p> </li> <li><p>In (4) we factor out <span class="math-container">$(1+v)^p$</span>.</p> </li> <li><p>In (5) we apply the binomial summation formula twice. Since we want to select the coefficient of <span class="math-container">$w^{p+q}$</span> we can restrict the upper limit of the left sum with <span class="math-container">$k=p+q$</span>.</p> </li> <li><p>In (6) we select the coefficient of <span class="math-container">$w^{p+q-k}$</span>.</p> </li> <li><p>In (7) we do some simplifications and can restrict the lower limit of the sum with <span class="math-container">$k=p$</span>.</p> </li> <li><p>In (8) we apply the <em>cross product</em> <span class="math-container">$\binom{n}{k}\binom{k}{j}=\binom{n}{j}\binom{n-j}{k-j}$</span>.</p> </li> <li><p>In (9) we shift the summation index and start from <span class="math-container">$k=0$</span>.</p> </li> <li><p>In (10) we apply the <em><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Chu.E2.80.93Vandermonde_identity" rel="nofollow noreferrer">Chu-Vandermonde identity</a></em>.</p> </li> </ul> Answer 2: <p>This is Bizley’s identity (6.42 in Gould's book "Combinatorial identities"): $$\sum^{n}_{r=0}\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}.$$ A nice combinatorial proof can be found in the following paper by Székely as a special case of a more general identity (see Corollary 3):</p> <p><a href="http://www.sciencedirect.com/science/article/pii/0097316585900573" rel="nofollow noreferrer">Common origin of cubic binomial 
identities. A generalization of Surányi's proof on Le Jen Shoo's formula</a></p> Answer 3: <p>The proposed identity can be derived (putting $r=s$) from this more general one $$ \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( \matrix{ m - \left( {r - s} \right) \cr k \cr} \right)\left( \matrix{ n + \left( {r - s} \right) \cr n - k \cr} \right)\left( \matrix{ r + k \cr m + n \cr} \right)} = \left( \matrix{ r \cr m \cr} \right)\left( \matrix{ s \cr n \cr} \right)\quad \quad \left| \matrix{ \,{\rm 0} \le {\rm integer \, }m,n \hfill \cr \;{\rm real}\;r,s \hfill \cr} \right. $$ reported in <em>"Concrete Mathematics" - pag. 171</em>, and which is demonstrated through the following passages: $$ \begin{gathered} \sum\limits_{\left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r + k \\ m + n \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)\left( \begin{gathered} k \\ j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} k \\ j \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( 
\begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m - \left( {r - s} \right) - j \\ k - j \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m + n - j \\ n - j \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m + n - j \\ m \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} r \\ m \\ \end{gathered} \right)\left( \begin{gathered} r - m \\ n - j \\ \end{gathered} \right)} = \hfill \\ = \left( \begin{gathered} r \\ m \\ \end{gathered} \right)\left( \begin{gathered} s \\ n \\ \end{gathered} \right) \hfill \\ \end{gathered} $$ consisting in: </p> <ul> <li>Vandermonde de-convolution</li> <li>shift of the 4th binomial</li> <li>"trinomial revision"</li> <li>sum on $k$ by Vandermonde convolution</li> <li>symmetry</li> <li>"trinomial revision"</li> <li>sum on $j$ by Vandermonde convolution</li> </ul> Answer 4: <p>Here is a derivation based upon <em><a 
href="https://mathworld.wolfram.com/GeneralizedHypergeometricFunction.html" rel="nofollow noreferrer">generalised hypergeometric functions</a></em>. We obtain using rising factorials <span class="math-container">$(a)_{k}:=a(a+1)\cdots(a+k-1)$</span> <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n}&amp;\underbrace{\color{blue}{\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}}}_{=: t_r}=\sum_{r=0}^{n}t_r=t_0\sum_{r=0}^n\prod_{j=0}^{r-1}\frac{t_{j+1}}{t_j}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\prod_{j=0}^{r-1}\binom{p}{j+1}\binom{q}{j+1}\binom{n+j+1}{p+q}\\ &amp;\qquad\qquad\qquad\cdot\binom{p}{j}^{-1}\binom{q}{j}^{-1}\binom{n+j}{p+q}^{-1}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\prod_{j=0}^{r-1} \frac{\left(n+j+1\right)\left(p-j\right)\left(q-j\right)}{\left(j+1\right)^2\left(n-p-q+j+1\right)}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\frac{(n+1)_r(-p)_r(-q)_r}{(1)_r^2(n-p-q+1)_r}\\ &amp;=\binom{n}{p+q}{_3F_2}\left(n+1,-p,-q;1,n-p-q+1;1\right)\tag{1}\\ &amp;=\binom{n}{p+q}\frac{(-n)_q(1+p)_q}{(1)_q(-n+p)_q}\tag{2}\\ &amp;=\binom{n}{p+q}\frac{(-1)^qn!}{(n-q)!}\,\frac{(p+q)!}{p!}\,\frac{1}{q!}\,\frac{(-1)^q(n-p-q)!}{(n-p)!}\\ &amp;=\frac{n!}{p!(n-p)!}\,\frac{n!}{q!(n-q)!}\\ &amp;\,\,\color{blue}{=\binom{n}{p}\binom{n}{q}} \end{align*}</span> and the claim follows.</p> <p>In (1) we write the sum as <span class="math-container">$_3F_2$</span> generalized hypergeometric function. This function has the nice property that the sum of the denominator parameters is one more than the sum of the numerator parameters. It is evaluated at <span class="math-container">$z=1$</span> and one of the numerator parameters is a negative integer. 
Such a series is called <em>balanced</em> and it obeys the <em><a href="https://mathworld.wolfram.com/SaalschuetzsTheorem.html" rel="nofollow noreferrer">Pfaff-Saalschütz identity</a></em> <span class="math-container">\begin{align*} {_3F_2}\left(-n,a,b;c,1+a+b-c-n;1\right)=\frac{(c-a)_n(c-b)_n}{(c)_n(c-a-b)_n} \end{align*}</span> which is used in (2).</p> Answer 5: <p>It is convenient to use the <em>coefficient of</em> operator <span class="math-container">$[t^r]$</span> to denote the coefficient of <span class="math-container">$t^r$</span> in a series. This way we can write e.g. <span class="math-container">\begin{align*} \binom{p}{r}=[t^r](1+t)^p \end{align*}</span></p> <p>The following is valid <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}} \end{align*}</span></p> <blockquote> <p>We obtain <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n}&amp;\color{blue}{\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}}\\ &amp;=\sum_{r=0}^n\binom{p}{r}\binom{q}{q-r}\binom{n+r}{p+q}\\ &amp;=\sum_{r=0}^\infty[t^r](1+t)^p[v^{q-r}](1+v)^q[w^{p+q}](1+w)^{n+r}\tag{1}\\ &amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n\sum_{r=0}^\infty(v(1+w))^r[t^r](1+t)^p\tag{2}\\ &amp;=[v^q](1+v)^q[w^{p+q}](1+w)^n(1+v(1+w))^p\tag{3}\\ &amp;=[v^q](1+v)^{p+q}[w^{p+q}](1+w)^n\left(1+\frac{vw}{1+v}\right)^p\tag{4}\\ &amp;=[v^q](1+v)^{p+q}[w^{p+q}]\sum_{k=0}^{p+q}\binom{n}{k}w^k\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\tag{5}\\ &amp;=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}[w^{p+q-k}]\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\\ &amp;=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}\binom{p}{p+q-k}\frac{v^{p+q-k}}{(1+v)^{p+q-k}}\tag{6}\\ &amp;=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}[v^{k-p}](1+v)^k\tag{7}\\ &amp;=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}\binom{k}{p}\\ &amp;=\binom{n}{p}\sum_{k=p}^{p+q}\binom{p}{k-q}\binom{n-p}{k-p}\tag{8}\\ 
&amp;=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{k+p-q}\binom{n-p}{k}\tag{9}\\ &amp;=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{q-k}\binom{n-p}{k}\tag{10}\\ &amp;\color{blue}{=\binom{n}{p}\binom{n}{q}} \end{align*}</span> and the claim follows.</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (1) we apply the <em>coefficient of</em> operator for each binomial coefficient.</p> </li> <li><p>In (2) we use the linearity of the <em>coefficient of</em> operator and apply the rule <span class="math-container">$[t^{p}]t^qA(t)=[t^{p-q}]A(t)$</span>.</p> </li> <li><p>In (3) we apply the substitution rule of the <em>coefficient of</em> operator with <span class="math-container">$t:=v(1+w)$</span> <span class="math-container">\begin{align*} A(z)=\sum_{r=0}^\infty a_rz^r=\sum_{r=0}^\infty z^r[t^r]A(t) \end{align*}</span></p> </li> <li><p>In (4) we factor out <span class="math-container">$(1+v)^p$</span>.</p> </li> <li><p>In (5) we apply the binomial summation formula twice. Since we want to select the coefficient of <span class="math-container">$w^{p+q}$</span> we can restrict the upper limit of the left sum with <span class="math-container">$k=p+q$</span>.</p> </li> <li><p>In (6) we select the coefficient of <span class="math-container">$w^{p+q-k}$</span>.</p> </li> <li><p>In (7) we do some simplifications and can restrict the lower limit of the sum with <span class="math-container">$k=p$</span>.</p> </li> <li><p>In (8) we apply the <em>cross product</em> <span class="math-container">$\binom{n}{k}\binom{k}{j}=\binom{n}{j}\binom{n-j}{k-j}$</span>.</p> </li> <li><p>In (9) we shift the summation index and start from <span class="math-container">$k=0$</span>.</p> </li> <li><p>In (10) we apply the <em><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Chu.E2.80.93Vandermonde_identity" rel="nofollow noreferrer">Chu-Vandermonde identity</a></em>.</p> </li> </ul> Answer 6: <p>This is Bizley’s identity (6.42 in Gould's book "Combinatorial identities"): 
$$\sum^{n}_{r=0}\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}.$$ A nice combinatorial proof can be found in the following paper by Székely as a special case of a more general identity (see Corollary 3):</p> <p><a href="http://www.sciencedirect.com/science/article/pii/0097316585900573" rel="nofollow noreferrer">Common origin of cubic binomial identities. A generalization of Surányi's proof on Le Jen Shoo's formula</a></p> Answer 7: <p>The proposed identity can be derived (putting $r=s$) from this more general one $$ \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( \matrix{ m - \left( {r - s} \right) \cr k \cr} \right)\left( \matrix{ n + \left( {r - s} \right) \cr n - k \cr} \right)\left( \matrix{ r + k \cr m + n \cr} \right)} = \left( \matrix{ r \cr m \cr} \right)\left( \matrix{ s \cr n \cr} \right)\quad \quad \left| \matrix{ \,{\rm 0} \le {\rm integer \, }m,n \hfill \cr \;{\rm real}\;r,s \hfill \cr} \right. $$ reported in <em>"Concrete Mathematics" - pag. 
171</em>, and which is demonstrated through the following passages: $$ \begin{gathered} \sum\limits_{\left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r + k \\ m + n \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)\left( \begin{gathered} k \\ j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ k \\ \end{gathered} \right)\left( \begin{gathered} k \\ j \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\begin{subarray}{l} \left( {0\, \leqslant } \right)\,k\,\left( { \leqslant \,n} \right) \\ \left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right) \end{subarray}} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m - \left( {r - s} \right) - j \\ k - j \\ \end{gathered} \right)\left( \begin{gathered} n + \left( {r - s} \right) \\ n - k \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant 
\,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m + n - j \\ n - j \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} m + n - j \\ m \\ \end{gathered} \right)\left( \begin{gathered} r \\ m + n - j \\ \end{gathered} \right)} = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,m + n} \right)} {\left( \begin{gathered} m - \left( {r - s} \right) \\ j \\ \end{gathered} \right)\left( \begin{gathered} r \\ m \\ \end{gathered} \right)\left( \begin{gathered} r - m \\ n - j \\ \end{gathered} \right)} = \hfill \\ = \left( \begin{gathered} r \\ m \\ \end{gathered} \right)\left( \begin{gathered} s \\ n \\ \end{gathered} \right) \hfill \\ \end{gathered} $$ consisting in: </p> <ul> <li>Vandermonde de-convolution</li> <li>shift of the 4th binomial</li> <li>"trinomial revision"</li> <li>sum on $k$ by Vandermonde convolution</li> <li>symmetry</li> <li>"trinomial revision"</li> <li>sum on $j$ by Vandermonde convolution</li> </ul> Answer 8: <p>Here is a derivation based upon <em><a href="https://mathworld.wolfram.com/GeneralizedHypergeometricFunction.html" rel="nofollow noreferrer">generalised hypergeometric functions</a></em>. 
We obtain using rising factorials <span class="math-container">$(a)_{k}:=a(a+1)\cdots(a+k-1)$</span> <span class="math-container">\begin{align*} \color{blue}{\sum_{r=0}^n}&amp;\underbrace{\color{blue}{\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}}}_{=: t_r}=\sum_{r=0}^{n}t_r=t_0\sum_{r=0}^n\prod_{j=0}^{r-1}\frac{t_{j+1}}{t_j}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\prod_{j=0}^{r-1}\binom{p}{j+1}\binom{q}{j+1}\binom{n+j+1}{p+q}\\ &amp;\qquad\qquad\qquad\cdot\binom{p}{j}^{-1}\binom{q}{j}^{-1}\binom{n+j}{p+q}^{-1}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\prod_{j=0}^{r-1} \frac{\left(n+j+1\right)\left(p-j\right)\left(q-j\right)}{\left(j+1\right)^2\left(n-p-q+j+1\right)}\\ &amp;=\binom{n}{p+q}\sum_{r=0}^n\frac{(n+1)_r(-p)_r(-q)_r}{(1)_r^2(n-p-q+1)_r}\\ &amp;=\binom{n}{p+q}{_3F_2}\left(n+1,-p,-q;1,n-p-q+1;1\right)\tag{1}\\ &amp;=\binom{n}{p+q}\frac{(-n)_q(1+p)_q}{(1)_q(-n+p)_q}\tag{2}\\ &amp;=\binom{n}{p+q}\frac{(-1)^qn!}{(n-q)!}\,\frac{(p+q)!}{p!}\,\frac{1}{q!}\,\frac{(-1)^q(n-p-q)!}{(n-p)!}\\ &amp;=\frac{n!}{p!(n-p)!}\,\frac{n!}{q!(n-q)!}\\ &amp;\,\,\color{blue}{=\binom{n}{p}\binom{n}{q}} \end{align*}</span> and the claim follows.</p> <p>In (1) we write the sum as <span class="math-container">$_3F_2$</span> generalized hypergeometric function. This function has the nice property that the sum of the denominator parameters is one more than the sum of the numerator parameters. It is evaluated at <span class="math-container">$z=1$</span> and one of the numerator parameters is a negative integer. Such a series is called <em>balanced</em> and it obeys the <em><a href="https://mathworld.wolfram.com/SaalschuetzsTheorem.html" rel="nofollow noreferrer">Pfaff-Saalschütz identity</a></em> <span class="math-container">\begin{align*} {_3F_2}\left(-n,a,b;c,1+a+b-c-n;1\right)=\frac{(c-a)_n(c-b)_n}{(c)_n(c-a-b)_n} \end{align*}</span> which is used in (2).</p>
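The closed form derived above is easy to spot-check numerically. Below is a minimal sketch (my own illustration, not part of any answer above) that verifies Bizley's identity $\sum_{r=0}^n \binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}$ over a small parameter grid, using the fact that Python's `math.comb(n, k)` returns $0$ when $k > n$:

```python
from math import comb

# Verify sum_{r=0}^{n} C(p,r) C(q,r) C(n+r, p+q) == C(n,p) C(n,q)
# over all non-negative integers with p, q <= n <= 8.
def lhs(n, p, q):
    return sum(comb(p, r) * comb(q, r) * comb(n + r, p + q)
               for r in range(n + 1))

def rhs(n, p, q):
    return comb(n, p) * comb(n, q)

checks = [(n, p, q) for n in range(9)
          for p in range(n + 1) for q in range(n + 1)]
assert all(lhs(n, p, q) == rhs(n, p, q) for n, p, q in checks)
print(f"identity holds for all {len(checks)} test triples")
```

This only exercises small integer parameters; the proofs above establish the identity in general.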
Derivatives of $f(x,y)=|x| + |y| − ||x| − |y||.$
2,088,912
4
https://math.stackexchange.com/questions/2088912/derivates-of-fx-y-x-y-%e2%88%92-x-%e2%88%92-y
<p>I worked on one of my problems and I would like you to check my work or suggest a better approach. $$f(x,y)=|x| + |y| − ||x| − |y||.$$ a) Find all $(x,y)$ such that $f$ is continuous at $(x,y)$.</p> <p>Solution a) It is easy to see that the function is continuous at every $(x,y)$, because the absolute value function $|x|$ is continuous at every $x$.</p> <p>b) Compute the partial derivatives of $f$ at the point $(0,0).$ </p> <p>Solution: My idea is not to compute the partial derivatives for arbitrary $x$ and $y$, but to use the limit definition directly.</p> <p>$$\lim_{h\to0} \frac{f(h,0)-f(0,0)}{h}=\lim_{h\to0}\frac{|h|+|0|-||h|-|0||}{h}=\lim_{h\to0}\frac{0}{h}=0$$ This is the partial derivative with respect to $x$. Similarly we get $0$ for the partial derivative with respect to $y$ at the point $(0,0)$.</p> <p>c) Is the function $f$ differentiable at $(0,0)$?</p> <p>My work: The idea is to use the definition; from part b) we know that the candidate differential operator is the null operator. $$\lim_{(x,y)\to(0,0)} \frac{f(x,y)-f(0,0)-A(x,y)}{||(x,y)||}=\lim_{(x,y)\to(0,0)}\frac{|x| + |y| − ||x| − |y||}{\sqrt{x^2+y^2}}$$ If we approach along $x=0$ the limit is $0$, but along $y=x$ the limit is not $0$, so the limit does not exist and $f$ is not differentiable at $(0,0)$.</p>
[ { "answer_id": 2092116, "score": 5, "text": "<p>As you may be aware, if $a$ and $b$ are real numbers, then\n\\begin{align*}\n\\max(a, b) &amp;= \\tfrac{1}{2}(a + b + |a - b|), \\\\\n\\min(a, b) &amp;= \\tfrac{1}{2}(a + b - |a - b|).\n\\end{align*}\n(If $I$ denotes the closed interval with endpoint $a$ and $b$, each formula may be interpreted as \"starting at the midpoint of $I$ and traveling to the right (max) or left (min) by half the length of $I$.)</p>\n\n<p>Particularly,\n$$\nf(x, y) = |x| + |y| - \\bigl||x| - |y|\\bigr| = 2\\min(|x|, |y|).\n$$\nThis observation makes your conclusions geometrically apparent.</p>\n" }, { "answer_id": 2092063, "score": 2, "text": "<p>Your work looks very good. Assuming this is homework, the only thing I would add is a bit more information about the very last statement. That the limit is $0$ if $x = 0$ is evident but you might show a little more explicitly why the limit is not $0$ if $y = x$. </p>\n" }, { "answer_id": 2102865, "score": 1, "text": "<p>Just a <B>tip / hint</B>.<BR>\nAn isoline chart / contour plot of the function at hand looks like this; darker means lower values.</p>\n\n<p><a href=\"https://i.stack.imgur.com/HDpgj.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HDpgj.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2092116, "score": 5, "text": "<p>As you may be aware, if $a$ and $b$ are real numbers, then\n\\begin{align*}\n\\max(a, b) &amp;= \\tfrac{1}{2}(a + b + |a - b|), \\\\\n\\min(a, b) &amp;= \\tfrac{1}{2}(a + b - |a - b|).\n\\end{align*}\n(If $I$ denotes the closed interval with endpoint $a$ and $b$, each formula may be interpreted as \"starting at the midpoint of $I$ and traveling to the right (max) or left (min) by half the length of $I$.)</p>\n\n<p>Particularly,\n$$\nf(x, y) = |x| + |y| - \\bigl||x| - |y|\\bigr| = 2\\min(|x|, |y|).\n$$\nThis observation makes your conclusions geometrically apparent.</p>\n" }, { "answer_id": 2092063, "score": 2, "text": 
"<p>Your work looks very good. Assuming this is homework, the only thing I would add is a bit more information about the very last statement. That the limit is $0$ if $x = 0$ is evident but you might show a little more explicitly why the limit is not $0$ if $y = x$. </p>\n" }, { "answer_id": 2102865, "score": 1, "text": "<p>Just a <B>tip / hint</B>.<BR>\nAn isoline chart / contour plot of the function at hand looks like this; darker means lower values.</p>\n\n<p><a href=\"https://i.stack.imgur.com/HDpgj.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HDpgj.jpg\" alt=\"enter image description here\"></a></p>\n" } ]
Question: Derivates of $f(x,y)=|x| + |y| − ||x| − |y||.$ Answer 1: <p>As you may be aware, if $a$ and $b$ are real numbers, then \begin{align*} \max(a, b) &amp;= \tfrac{1}{2}(a + b + |a - b|), \\ \min(a, b) &amp;= \tfrac{1}{2}(a + b - |a - b|). \end{align*} (If $I$ denotes the closed interval with endpoint $a$ and $b$, each formula may be interpreted as "starting at the midpoint of $I$ and traveling to the right (max) or left (min) by half the length of $I$.)</p> <p>Particularly, $$ f(x, y) = |x| + |y| - \bigl||x| - |y|\bigr| = 2\min(|x|, |y|). $$ This observation makes your conclusions geometrically apparent.</p> Answer 2: <p>Your work looks very good. Assuming this is homework, the only thing I would add is a bit more information about the very last statement. That the limit is $0$ if $x = 0$ is evident but you might show a little more explicitly why the limit is not $0$ if $y = x$. </p> Answer 3: <p>Just a <B>tip / hint</B>.<BR> An isoline chart / contour plot of the function at hand looks like this; darker means lower values.</p> <p><a href="https://i.stack.imgur.com/HDpgj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HDpgj.jpg" alt="enter image description here"></a></p> Answer 4: <p>As you may be aware, if $a$ and $b$ are real numbers, then \begin{align*} \max(a, b) &amp;= \tfrac{1}{2}(a + b + |a - b|), \\ \min(a, b) &amp;= \tfrac{1}{2}(a + b - |a - b|). \end{align*} (If $I$ denotes the closed interval with endpoint $a$ and $b$, each formula may be interpreted as "starting at the midpoint of $I$ and traveling to the right (max) or left (min) by half the length of $I$.)</p> <p>Particularly, $$ f(x, y) = |x| + |y| - \bigl||x| - |y|\bigr| = 2\min(|x|, |y|). $$ This observation makes your conclusions geometrically apparent.</p> Answer 5: <p>Your work looks very good. Assuming this is homework, the only thing I would add is a bit more information about the very last statement. 
That the limit is $0$ if $x = 0$ is evident but you might show a little more explicitly why the limit is not $0$ if $y = x$. </p> Answer 6: <p>Just a <B>tip / hint</B>.<BR> An isoline chart / contour plot of the function at hand looks like this; darker means lower values.</p> <p><a href="https://i.stack.imgur.com/HDpgj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HDpgj.jpg" alt="enter image description here"></a></p>
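Both the reduction $f(x,y)=2\min(|x|,|y|)$ from Answer 1 and the asker's part (c) conclusion can be probed numerically. Here is a quick sketch (my own addition, not from the answers above):

```python
# f(x, y) = |x| + |y| - ||x| - |y||, which Answer 1 rewrites as 2*min(|x|, |y|).
def f(x, y):
    return abs(x) + abs(y) - abs(abs(x) - abs(y))

pts = [(0.3, -1.2), (2.0, 2.0), (-0.5, 0.1), (0.0, 3.0)]
assert all(abs(f(x, y) - 2 * min(abs(x), abs(y))) < 1e-12 for x, y in pts)

# Part (c): the candidate differential at (0,0) is the zero map, so
# differentiability would force f(x, y) / ||(x, y)|| -> 0 along every path.
for h in (1e-1, 1e-3, 1e-6):
    along_axis = f(h, 0.0) / h               # path y = 0: ratio stays 0
    along_diag = f(h, h) / (h * 2 ** 0.5)    # path y = x: ratio stays sqrt(2)
    print(along_axis, along_diag)
```

The two path ratios disagree no matter how small `h` gets, which is exactly the asker's argument that the limit in part (c) does not exist.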
Equation system with two unknown variables $xy=6930$ and $y/x=1.41$
2,089,191
4
https://math.stackexchange.com/questions/2089191/equation-system-with-two-unknown-variables-xy-6930-and-y-x-1-41
<p>I am sorry if this is too easy for you, but it's been very many decades since I had math at school. ;-) Please also consider that English is a foreign language for me.</p> <p>I need to resolve X and Y for this equation system:</p> <p>\begin{align} x\cdot y &amp;= 6930\\ \frac yx &amp;= 1.41 \end{align}</p> <p>How can this be done and which are the steps to the solution? Please write in easy to understand arithmetic language (no advanced mathematical symbols please).</p> <p>Many thanks in advance!</p>
[ { "answer_id": 2089198, "score": 4, "text": "<p>If $y/x = 1.41$, then $y = 1.41x$. So plugging this in the first equation yields $1.41x^2 = 6930$ and thus $x = \\pm \\sqrt{6930/1.41}$. Now it should be easy to find also $y$.</p>\n" }, { "answer_id": 2089217, "score": 4, "text": "<p>The first equation\n$$\nx \\cdot y = 6930 \\quad (*)\n$$\nimplies that both $x$ and $y$ can not be zero.</p>\n\n<p>So the second equation\n$$\n\\frac{y}{x} = 1.41 \\quad (**)\n$$\nis equivalent (has the same solutions) to \n$$\ny = 1.41 \\cdot x\n$$\nWe can insert this into the first equation and get\n$$\nx \\cdot (1.41 \\cdot x) = 6930\n$$\nwhich simplifies to\n$$\nx^2 = \\frac{6930}{1.41}\n$$\nor \n$$\nx = \\pm \\sqrt{\\frac{6930}{1.41}}\n$$\nwhere the $\\pm$ is short hand notation for two solutions, one positive, one negative.</p>\n\n<p>Finally we get\n$$\ny \n= 1.41 \\cdot x \n= 1.41 \\cdot \\pm \\sqrt{\\frac{6930}{1.41}}\n= \\pm \\sqrt{1.41 \\cdot 6930}\n$$\nSo we got four solutions in total, however equation $(*)$ or $(**)$ will only permit pairs of same signs, as we need a positive product or fraction, so we end up with two solutions:\n$$\n(x, y) = \n\\pm \\left( \\sqrt{\\frac{6930}{1.41}}, \\sqrt{1.41 \\cdot 6930}\\right)\n$$</p>\n\n<p><strong>Update:</strong></p>\n\n<p>Here you can fiddle with GeoGebra: <a href=\"https://ggbm.at/hxugY3FE\" rel=\"nofollow noreferrer\">link</a></p>\n\n<p>This free software allows you to solve the problem graphically, symbolic and numeric.</p>\n\n<p><a href=\"https://i.stack.imgur.com/zNzgU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zNzgU.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089252, "score": 3, "text": "<p>Start with$$\\begin{align*} &amp; xy=6930\\tag1\\\\ &amp; \\tfrac yx=1.41\\tag2\\end{align*}$$\nMultiplying the numerator and denominator of $(2)$ by $y$ gives$$\\begin{align*} &amp; \\dfrac {y^2}{xy}=1.41\\tag3\\\\ &amp; y^2=1.41\\cdot xy\\tag4\\\\ &amp; y^2=1.41\\cdot 
6930\\tag5\\\\ &amp; y=\\pm\\sqrt{9771.3}\\end{align*}$$\nSubstitute that back into $(1)$ to get the corresponding $x$ values.</p>\n" }, { "answer_id": 2089220, "score": 2, "text": "<p>First divide the first equation by X:</p>\n\n<p>$$Y = 6930/X$$</p>\n\n<p>Then substitute the Y into the 2nd equation.</p>\n\n<p>$$(6930/X)/X=1.41$$</p>\n\n<p>Multiply by X:</p>\n\n<p>$$6930/X = 1.41X$$</p>\n\n<p>... again</p>\n\n<p>$$6930=1.41X^2$$</p>\n\n<p>Divide by 1.41</p>\n\n<p>$$4914.89=X^2$$</p>\n\n<p>Take square root:</p>\n\n<p>$$X= +/-70.10$$</p>\n\n<p>Solve the Y by yourself :).</p>\n" }, { "answer_id": 2089198, "score": 4, "text": "<p>If $y/x = 1.41$, then $y = 1.41x$. So plugging this in the first equation yields $1.41x^2 = 6930$ and thus $x = \\pm \\sqrt{6930/1.41}$. Now it should be easy to find also $y$.</p>\n" }, { "answer_id": 2089217, "score": 4, "text": "<p>The first equation\n$$\nx \\cdot y = 6930 \\quad (*)\n$$\nimplies that both $x$ and $y$ can not be zero.</p>\n\n<p>So the second equation\n$$\n\\frac{y}{x} = 1.41 \\quad (**)\n$$\nis equivalent (has the same solutions) to \n$$\ny = 1.41 \\cdot x\n$$\nWe can insert this into the first equation and get\n$$\nx \\cdot (1.41 \\cdot x) = 6930\n$$\nwhich simplifies to\n$$\nx^2 = \\frac{6930}{1.41}\n$$\nor \n$$\nx = \\pm \\sqrt{\\frac{6930}{1.41}}\n$$\nwhere the $\\pm$ is short hand notation for two solutions, one positive, one negative.</p>\n\n<p>Finally we get\n$$\ny \n= 1.41 \\cdot x \n= 1.41 \\cdot \\pm \\sqrt{\\frac{6930}{1.41}}\n= \\pm \\sqrt{1.41 \\cdot 6930}\n$$\nSo we got four solutions in total, however equation $(*)$ or $(**)$ will only permit pairs of same signs, as we need a positive product or fraction, so we end up with two solutions:\n$$\n(x, y) = \n\\pm \\left( \\sqrt{\\frac{6930}{1.41}}, \\sqrt{1.41 \\cdot 6930}\\right)\n$$</p>\n\n<p><strong>Update:</strong></p>\n\n<p>Here you can fiddle with GeoGebra: <a href=\"https://ggbm.at/hxugY3FE\" rel=\"nofollow noreferrer\">link</a></p>\n\n<p>This free 
software allows you to solve the problem graphically, symbolic and numeric.</p>\n\n<p><a href=\"https://i.stack.imgur.com/zNzgU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zNzgU.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089252, "score": 3, "text": "<p>Start with$$\\begin{align*} &amp; xy=6930\\tag1\\\\ &amp; \\tfrac yx=1.41\\tag2\\end{align*}$$\nMultiplying the numerator and denominator of $(2)$ by $y$ gives$$\\begin{align*} &amp; \\dfrac {y^2}{xy}=1.41\\tag3\\\\ &amp; y^2=1.41\\cdot xy\\tag4\\\\ &amp; y^2=1.41\\cdot 6930\\tag5\\\\ &amp; y=\\pm\\sqrt{9771.3}\\end{align*}$$\nSubstitute that back into $(1)$ to get the corresponding $x$ values.</p>\n" }, { "answer_id": 2089220, "score": 2, "text": "<p>First divide the first equation by X:</p>\n\n<p>$$Y = 6930/X$$</p>\n\n<p>Then substitute the Y into the 2nd equation.</p>\n\n<p>$$(6930/X)/X=1.41$$</p>\n\n<p>Multiply by X:</p>\n\n<p>$$6930/X = 1.41X$$</p>\n\n<p>... again</p>\n\n<p>$$6930=1.41X^2$$</p>\n\n<p>Divide by 1.41</p>\n\n<p>$$4914.89=X^2$$</p>\n\n<p>Take square root:</p>\n\n<p>$$X= +/-70.10$$</p>\n\n<p>Solve the Y by yourself :).</p>\n" } ]
Question: Equation system with two unknown variables $xy=6930$ and $y/x=1.41$ Answer 1: <p>If $y/x = 1.41$, then $y = 1.41x$. So plugging this in the first equation yields $1.41x^2 = 6930$ and thus $x = \pm \sqrt{6930/1.41}$. Now it should be easy to find also $y$.</p> Answer 2: <p>The first equation $$ x \cdot y = 6930 \quad (*) $$ implies that both $x$ and $y$ can not be zero.</p> <p>So the second equation $$ \frac{y}{x} = 1.41 \quad (**) $$ is equivalent (has the same solutions) to $$ y = 1.41 \cdot x $$ We can insert this into the first equation and get $$ x \cdot (1.41 \cdot x) = 6930 $$ which simplifies to $$ x^2 = \frac{6930}{1.41} $$ or $$ x = \pm \sqrt{\frac{6930}{1.41}} $$ where the $\pm$ is short hand notation for two solutions, one positive, one negative.</p> <p>Finally we get $$ y = 1.41 \cdot x = 1.41 \cdot \pm \sqrt{\frac{6930}{1.41}} = \pm \sqrt{1.41 \cdot 6930} $$ So we got four solutions in total, however equation $(*)$ or $(**)$ will only permit pairs of same signs, as we need a positive product or fraction, so we end up with two solutions: $$ (x, y) = \pm \left( \sqrt{\frac{6930}{1.41}}, \sqrt{1.41 \cdot 6930}\right) $$</p> <p><strong>Update:</strong></p> <p>Here you can fiddle with GeoGebra: <a href="https://ggbm.at/hxugY3FE" rel="nofollow noreferrer">link</a></p> <p>This free software allows you to solve the problem graphically, symbolic and numeric.</p> <p><a href="https://i.stack.imgur.com/zNzgU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zNzgU.png" alt="enter image description here"></a></p> Answer 3: <p>Start with$$\begin{align*} &amp; xy=6930\tag1\\ &amp; \tfrac yx=1.41\tag2\end{align*}$$ Multiplying the numerator and denominator of $(2)$ by $y$ gives$$\begin{align*} &amp; \dfrac {y^2}{xy}=1.41\tag3\\ &amp; y^2=1.41\cdot xy\tag4\\ &amp; y^2=1.41\cdot 6930\tag5\\ &amp; y=\pm\sqrt{9771.3}\end{align*}$$ Substitute that back into $(1)$ to get the corresponding $x$ values.</p> Answer 4: <p>First divide the first equation 
by X:</p> <p>$$Y = 6930/X$$</p> <p>Then substitute the Y into the 2nd equation.</p> <p>$$(6930/X)/X=1.41$$</p> <p>Multiply by X:</p> <p>$$6930/X = 1.41X$$</p> <p>... again</p> <p>$$6930=1.41X^2$$</p> <p>Divide by 1.41</p> <p>$$4914.89=X^2$$</p> <p>Take square root:</p> <p>$$X= +/-70.10$$</p> <p>Solve the Y by yourself :).</p> Answer 5: <p>If $y/x = 1.41$, then $y = 1.41x$. So plugging this in the first equation yields $1.41x^2 = 6930$ and thus $x = \pm \sqrt{6930/1.41}$. Now it should be easy to find also $y$.</p> Answer 6: <p>The first equation $$ x \cdot y = 6930 \quad (*) $$ implies that both $x$ and $y$ can not be zero.</p> <p>So the second equation $$ \frac{y}{x} = 1.41 \quad (**) $$ is equivalent (has the same solutions) to $$ y = 1.41 \cdot x $$ We can insert this into the first equation and get $$ x \cdot (1.41 \cdot x) = 6930 $$ which simplifies to $$ x^2 = \frac{6930}{1.41} $$ or $$ x = \pm \sqrt{\frac{6930}{1.41}} $$ where the $\pm$ is short hand notation for two solutions, one positive, one negative.</p> <p>Finally we get $$ y = 1.41 \cdot x = 1.41 \cdot \pm \sqrt{\frac{6930}{1.41}} = \pm \sqrt{1.41 \cdot 6930} $$ So we got four solutions in total, however equation $(*)$ or $(**)$ will only permit pairs of same signs, as we need a positive product or fraction, so we end up with two solutions: $$ (x, y) = \pm \left( \sqrt{\frac{6930}{1.41}}, \sqrt{1.41 \cdot 6930}\right) $$</p> <p><strong>Update:</strong></p> <p>Here you can fiddle with GeoGebra: <a href="https://ggbm.at/hxugY3FE" rel="nofollow noreferrer">link</a></p> <p>This free software allows you to solve the problem graphically, symbolic and numeric.</p> <p><a href="https://i.stack.imgur.com/zNzgU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zNzgU.png" alt="enter image description here"></a></p> Answer 7: <p>Start with$$\begin{align*} &amp; xy=6930\tag1\\ &amp; \tfrac yx=1.41\tag2\end{align*}$$ Multiplying the numerator and denominator of $(2)$ by $y$ gives$$\begin{align*} 
&amp; \dfrac {y^2}{xy}=1.41\tag3\\ &amp; y^2=1.41\cdot xy\tag4\\ &amp; y^2=1.41\cdot 6930\tag5\\ &amp; y=\pm\sqrt{9771.3}\end{align*}$$ Substitute that back into $(1)$ to get the corresponding $x$ values.</p> Answer 8: <p>First divide the first equation by X:</p> <p>$$Y = 6930/X$$</p> <p>Then substitute the Y into the 2nd equation.</p> <p>$$(6930/X)/X=1.41$$</p> <p>Multiply by X:</p> <p>$$6930/X = 1.41X$$</p> <p>... again</p> <p>$$6930=1.41X^2$$</p> <p>Divide by 1.41</p> <p>$$4914.89=X^2$$</p> <p>Take square root:</p> <p>$$X= +/-70.10$$</p> <p>Solve the Y by yourself :).</p>
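As a quick numerical check of the substitution worked through in the answers above (this sketch and its variable names are mine, not part of the original answers):

```python
import math

# System: x * y = 6930 and y / x = 1.41.
# Substituting y = 1.41 * x into the first equation gives 1.41 * x**2 = 6930.
x = math.sqrt(6930 / 1.41)   # positive solution; (-x, -y) is the other one
y = 1.41 * x                 # equivalently math.sqrt(1.41 * 6930)

assert abs(x * y - 6930) < 1e-6   # first equation holds up to rounding
assert abs(y / x - 1.41) < 1e-9   # second equation holds up to rounding
print(round(x, 2), round(y, 2))
```

This agrees with the value $X \approx 70.10$ in the last answer, which truncated rather than rounded ($x \approx 70.106$).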
Irreducibility of reciprocal polynomials
2,089,464
4
https://math.stackexchange.com/questions/2089464/irreducibility-of-reciprocal-polynomials
<p>Given that $f(x)=\sum\limits_{k=0}^n a_kx^k$ is an irreducible polynomial, show that $g(x)=\sum\limits_{k=0}^n a_kx^{n-k}$ is irreducible too. I don't see how this specific coefficient exchange would preserve the irreducibility. Is there something else I should be looking at?</p>
[ { "answer_id": 2089473, "score": 1, "text": "<p>Since $f$ is irreducible, $f\\neq a_n X^n$, so $\\deg g\\geq 1$.</p>\n\n<p>Note that $g(X)=X^nf(\\frac 1X)$. Supposing $g(X)=r(X)s(X)$ with $\\deg r\\geq1$, you get $$X^nr(\\frac 1X)s(\\frac 1X)=f(X)$$\nthat is $$ X^{n-\\deg g} \\left(X^{\\deg r}r(\\frac 1X)\\right) \\left(X^{\\deg s}s(\\frac 1X)\\right)=f(X)$$</p>\n\n<p>If $\\deg (X^{\\deg r}r(\\frac 1X))\\geq 1$ or $\\deg (X^{\\deg s}s(\\frac 1X))\\geq 1$ we get a contradiction with the irreducibility of $f$.</p>\n\n<p>Otherwise, $X^{\\deg r}r(\\frac 1X)$ and $X^{\\deg s}s(\\frac 1X)$ are both constant, which implies $n-\\deg g = \\deg f = n$ hence $\\deg g = 0$, which is in contradiction with the first line of my answer.</p>\n" }, { "answer_id": 2089482, "score": 1, "text": "<p><strong>Hint</strong> The reversal map $\\, f\\mapsto x^d f(x^{-1}),\\ d=\\deg f\\,$ is <em>multiplicative</em>, being the product of two multiplicative maps, namely $\\,f(x)\\mapsto f(x^{-1}),\\,$ and $\\,f(x)\\mapsto x^{\\,\\deg f}\\ $ (an \"exponential\" of the <em>additive</em> degree map). Being multiplicative, it preserves (ir)reducibility.</p>\n" }, { "answer_id": 2089474, "score": 0, "text": "<p><strong>Hint:</strong></p>\n<p>If <span class=\"math-container\">$\\deg f=n$</span>, <span class=\"math-container\">$g(x)=x^nf\\left(\\frac1x\\right)$</span> and <span class=\"math-container\">$f(x)=x^n g\\left(\\frac1x\\right)$</span>.</p>\n" }, { "answer_id": 2089473, "score": 1, "text": "<p>Since $f$ is irreducible, $f\\neq a_n X^n$, so $\\deg g\\geq 1$.</p>\n\n<p>Note that $g(X)=X^nf(\\frac 1X)$. 
Supposing $g(X)=r(X)s(X)$ with $\\deg r\\geq1$, you get $$X^nr(\\frac 1X)s(\\frac 1X)=f(X)$$\nthat is $$ X^{n-\\deg g} \\left(X^{\\deg r}r(\\frac 1X)\\right) \\left(X^{\\deg s}s(\\frac 1X)\\right)=f(X)$$</p>\n\n<p>If $\\deg (X^{\\deg r}r(\\frac 1X))\\geq 1$ or $\\deg (X^{\\deg s}s(\\frac 1X))\\geq 1$ we get a contradiction with the irreducibility of $f$.</p>\n\n<p>Otherwise, $X^{\\deg r}r(\\frac 1X)$ and $X^{\\deg s}s(\\frac 1X)$ are both constant, which implies $n-\\deg g = \\deg f = n$ hence $\\deg g = 0$, which is in contradiction with the first line of my answer.</p>\n" }, { "answer_id": 2089482, "score": 1, "text": "<p><strong>Hint</strong> The reversal map $\\, f\\mapsto x^d f(x^{-1}),\\ d=\\deg f\\,$ is <em>multiplicative</em>, being the product of two multiplicative maps, namely $\\,f(x)\\mapsto f(x^{-1}),\\,$ and $\\,f(x)\\mapsto x^{\\,\\deg f}\\ $ (an \"exponential\" of the <em>additive</em> degree map). Being multiplicative, it preserves (ir)reducibility.</p>\n" }, { "answer_id": 2089474, "score": 0, "text": "<p><strong>Hint:</strong></p>\n<p>If <span class=\"math-container\">$\\deg f=n$</span>, <span class=\"math-container\">$g(x)=x^nf\\left(\\frac1x\\right)$</span> and <span class=\"math-container\">$f(x)=x^n g\\left(\\frac1x\\right)$</span>.</p>\n" } ]
Question: Irreducibility of reciprocal polynomials Answer 1: <p>Since $f$ is irreducible, $f\neq a_n X^n$, so $\deg g\geq 1$.</p> <p>Note that $g(X)=X^nf(\frac 1X)$. Supposing $g(X)=r(X)s(X)$ with $\deg r\geq1$, you get $$X^nr(\frac 1X)s(\frac 1X)=f(X)$$ that is $$ X^{n-\deg g} \left(X^{\deg r}r(\frac 1X)\right) \left(X^{\deg s}s(\frac 1X)\right)=f(X)$$</p> <p>If $\deg (X^{\deg r}r(\frac 1X))\geq 1$ or $\deg (X^{\deg s}s(\frac 1X))\geq 1$ we get a contradiction with the irreducibility of $f$.</p> <p>Otherwise, $X^{\deg r}r(\frac 1X)$ and $X^{\deg s}s(\frac 1X)$ are both constant, which implies $n-\deg g = \deg f = n$ hence $\deg g = 0$, which is in contradiction with the first line of my answer.</p> Answer 2: <p><strong>Hint</strong> The reversal map $\, f\mapsto x^d f(x^{-1}),\ d=\deg f\,$ is <em>multiplicative</em>, being the product of two multiplicative maps, namely $\,f(x)\mapsto f(x^{-1}),\,$ and $\,f(x)\mapsto x^{\,\deg f}\ $ (an "exponential" of the <em>additive</em> degree map). Being multiplicative, it preserves (ir)reducibility.</p> Answer 3: <p><strong>Hint:</strong></p> <p>If <span class="math-container">$\deg f=n$</span>, <span class="math-container">$g(x)=x^nf\left(\frac1x\right)$</span> and <span class="math-container">$f(x)=x^n g\left(\frac1x\right)$</span>.</p> Answer 4: <p>Since $f$ is irreducible, $f\neq a_n X^n$, so $\deg g\geq 1$.</p> <p>Note that $g(X)=X^nf(\frac 1X)$. 
Supposing $g(X)=r(X)s(X)$ with $\deg r\geq1$, you get $$X^nr(\frac 1X)s(\frac 1X)=f(X)$$ that is $$ X^{n-\deg g} \left(X^{\deg r}r(\frac 1X)\right) \left(X^{\deg s}s(\frac 1X)\right)=f(X)$$</p> <p>If $\deg (X^{\deg r}r(\frac 1X))\geq 1$ or $\deg (X^{\deg s}s(\frac 1X))\geq 1$ we get a contradiction with the irreducibility of $f$.</p> <p>Otherwise, $X^{\deg r}r(\frac 1X)$ and $X^{\deg s}s(\frac 1X)$ are both constant, which implies $n-\deg g = \deg f = n$ hence $\deg g = 0$, which is in contradiction with the first line of my answer.</p> Answer 5: <p><strong>Hint</strong> The reversal map $\, f\mapsto x^d f(x^{-1}),\ d=\deg f\,$ is <em>multiplicative</em>, being the product of two multiplicative maps, namely $\,f(x)\mapsto f(x^{-1}),\,$ and $\,f(x)\mapsto x^{\,\deg f}\ $ (an "exponential" of the <em>additive</em> degree map). Being multiplicative, it preserves (ir)reducibility.</p> Answer 6: <p><strong>Hint:</strong></p> <p>If <span class="math-container">$\deg f=n$</span>, <span class="math-container">$g(x)=x^nf\left(\frac1x\right)$</span> and <span class="math-container">$f(x)=x^n g\left(\frac1x\right)$</span>.</p>
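For a concrete sanity check of the statement: a cubic with rational coefficients is irreducible over $\mathbb{Q}$ exactly when it has no rational root (any factorization of a cubic contains a linear factor), so the rational root theorem suffices for degree $\le 3$. The polynomial below is my own example (irreducible by Eisenstein at $p=2$), not one from the thread:

```python
from fractions import Fraction
from itertools import product

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def has_rational_root(coeffs):
    """coeffs[k] is the coefficient of x^k; nonzero integer constant
    and leading coefficients are assumed (rational root theorem)."""
    a0, an = coeffs[0], coeffs[-1]
    for p, q in product(divisors(a0), divisors(an)):
        for sign in (1, -1):
            r = Fraction(sign * p, q)
            if sum(c * r**k for k, c in enumerate(coeffs)) == 0:
                return True
    return False

# f(x) = x^3 + 2x + 2 is irreducible over Q; its reciprocal polynomial
# is g(x) = 2x^3 + 2x^2 + 1, and neither has a rational root.
f = [2, 2, 0, 1]          # coefficients a_0 .. a_3 of f
g = list(reversed(f))     # reversing the coefficients gives g
assert not has_rational_root(f)
assert not has_rational_root(g)
```

This only verifies the theorem on one example, of course; the proofs above cover the general case.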
Proving that the set of the natural numbers is infinite
2,089,619
4
https://math.stackexchange.com/questions/2089619/proving-that-the-set-of-the-natural-numbers-is-infinite
<p>I have proven that if $X$ is a finite set and $Y$ is a proper subset of $X$, then there does not exist $f:X \rightarrow Y$ such that $f$ is a bijection.</p> <p>I intend to show that the set of natural numbers is infinite. Let $P=\{2,4,6,\dots\}$. So, by contraposition, I just have to show that $f: \mathbb{N} \rightarrow P$, given by $f(n)=2n$, is actually a bijection.</p> <p>Logically speaking (in my book), I have so far constructed only the natural numbers, their addition and multiplication.</p> <p>So far, I have shown that $f$ is injective. But in order to prove that $f$ is a surjection, how can I do that without using the usual: "Take any $y\in P$. Given $x=\frac{y}{2}$ ..."? Because $x$ is actually a rational number given in a "strange form", which I haven't constructed yet.</p>
[ { "answer_id": 2089630, "score": 1, "text": "<p>The function $f$ you gave is <em>surjective onto its image</em>, and this image is a proper subset (can $2n=1$ be realized by some $n \\in\\mathbb{N}$?). </p>\n\n<p>By the fact that there is a bijection $f: \\mathbb{N} \\to 2\\mathbb{N}$, and $2\\mathbb{N}$ is a proper subset of $\\mathbb{N}$, it follows that $\\mathbb{N}$ cannot be finite.</p>\n\n<p>Showing there is a bijection $f: \\mathbb{N} \\to \\mathbb{N}$ doesn't do much, because such a bijection does not contradict the theorem you've mentioned. </p>\n" }, { "answer_id": 2089670, "score": 1, "text": "<p>You are not using the definition of even number.</p>\n\n<p><strong>Definition 1</strong> <em>(Even number).</em> We say that a natural number $n$ is an <em>even number</em> if $n=2k$ for some natural number $k$.</p>\n\n<p>In order to prove that $f\\colon \\mathbb N\\to P$ is a surjection, let $y\\in P$. Now we need to find some $x\\in\\mathbb N$ such that $f(x)=2x=y$. But such $x$ exists by Definition 1 as desired.</p>\n\n<blockquote>\n <p><strong>Definition 2</strong> <em>(Surjective functions).</em> A function $f$ is <em>surjective</em> if every element in $Y$ comes from applying $f$ to some element in $X$: $$\\text{For every }y\\in Y\\text{, there exists }x\\in X\\text{ such that }f(x)=y.$$</p>\n</blockquote>\n" }, { "answer_id": 2089630, "score": 1, "text": "<p>The function $f$ you gave is <em>surjective onto its image</em>, and this image is a proper subset (can $2n=1$ be realized by some $n \\in\\mathbb{N}$?). </p>\n\n<p>By the fact that there is a bijection $f: \\mathbb{N} \\to 2\\mathbb{N}$, and $2\\mathbb{N}$ is a proper subset of $\\mathbb{N}$, it follows that $\\mathbb{N}$ cannot be finite.</p>\n\n<p>Showing there is a bijection $f: \\mathbb{N} \\to \\mathbb{N}$ doesn't do much, because such a bijection does not contradict the theorem you've mentioned. 
</p>\n" }, { "answer_id": 2089670, "score": 1, "text": "<p>You are not using the definition of even number.</p>\n\n<p><strong>Definition 1</strong> <em>(Even number).</em> We say that a natural number $n$ is an <em>even number</em> if $n=2k$ for some natural number $k$.</p>\n\n<p>In order to prove that $f\\colon \\mathbb N\\to P$ is a surjection, let $y\\in P$. Now we need to find some $x\\in\\mathbb N$ such that $f(x)=2x=y$. But such $x$ exists by Definition 1 as desired.</p>\n\n<blockquote>\n <p><strong>Definition 2</strong> <em>(Surjective functions).</em> A function $f$ is <em>surjective</em> if every element in $Y$ comes from applying $f$ to some element in $X$: $$\\text{For every }y\\in Y\\text{, there exists }x\\in X\\text{ such that }f(x)=y.$$</p>\n</blockquote>\n" } ]
Question: Proving that the set of the natural numbers is infinite Answer 1: <p>The function $f$ you gave is <em>surjective onto its image</em>, and this image is a proper subset (can $2n=1$ be realized by some $n \in\mathbb{N}$?). </p> <p>By the fact that there is a bijection $f: \mathbb{N} \to 2\mathbb{N}$, and $2\mathbb{N}$ is a proper subset of $\mathbb{N}$, it follows that $\mathbb{N}$ cannot be finite.</p> <p>Showing there is a bijection $f: \mathbb{N} \to \mathbb{N}$ doesn't do much, because such a bijection does not contradict the theorem you've mentioned. </p> Answer 2: <p>You are not using the definition of even number.</p> <p><strong>Definition 1</strong> <em>(Even number).</em> We say that a natural number $n$ is an <em>even number</em> if $n=2k$ for some natural number $k$.</p> <p>In order to prove that $f\colon \mathbb N\to P$ is a surjection, let $y\in P$. Now we need to find some $x\in\mathbb N$ such that $f(x)=2x=y$. But such $x$ exists by Definition 1 as desired.</p> <blockquote> <p><strong>Definition 2</strong> <em>(Surjective functions).</em> A function $f$ is <em>surjective</em> if every element in $Y$ comes from applying $f$ to some element in $X$: $$\text{For every }y\in Y\text{, there exists }x\in X\text{ such that }f(x)=y.$$</p> </blockquote> Answer 3: <p>The function $f$ you gave is <em>surjective onto its image</em>, and this image is a proper subset (can $2n=1$ be realized by some $n \in\mathbb{N}$?). </p> <p>By the fact that there is a bijection $f: \mathbb{N} \to 2\mathbb{N}$, and $2\mathbb{N}$ is a proper subset of $\mathbb{N}$, it follows that $\mathbb{N}$ cannot be finite.</p> <p>Showing there is a bijection $f: \mathbb{N} \to \mathbb{N}$ doesn't do much, because such a bijection does not contradict the theorem you've mentioned. 
</p> Answer 4: <p>You are not using the definition of even number.</p> <p><strong>Definition 1</strong> <em>(Even number).</em> We say that a natural number $n$ is an <em>even number</em> if $n=2k$ for some natural number $k$.</p> <p>In order to prove that $f\colon \mathbb N\to P$ is a surjection, let $y\in P$. Now we need to find some $x\in\mathbb N$ such that $f(x)=2x=y$. But such $x$ exists by Definition 1 as desired.</p> <blockquote> <p><strong>Definition 2</strong> <em>(Surjective functions).</em> A function $f$ is <em>surjective</em> if every element in $Y$ comes from applying $f$ to some element in $X$: $$\text{For every }y\in Y\text{, there exists }x\in X\text{ such that }f(x)=y.$$</p> </blockquote>
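A finite-window sanity check of the bijection $f(n)=2n$ between $\mathbb{N}$ and $P=\{2,4,6,\dots\}$ (this sketch is mine and, of course, verifies injectivity and surjectivity only on the chosen window):

```python
N = 1000                               # finite window; any N shows the same
naturals = range(1, N + 1)
evens = {2 * n for n in naturals}      # P restricted to this window

images = [2 * n for n in naturals]
assert len(set(images)) == len(images)  # injective: no two n share an image
assert set(images) == evens             # surjective onto the evens here
assert 1 not in evens                   # and P really is a *proper* subset
```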
How to prove the positive-definiteness of this matrix?
2,089,766
4
https://math.stackexchange.com/questions/2089766/how-to-prove-the-positive-definiteness-of-this-matrix
<p>I have this matrix $$ C= \left( \begin{array}{c|c} A+\alpha I_m &amp; -A-\alpha I_m \\ \hline -A-\alpha I_m &amp; A+\alpha I_m \end{array} \right)\\ A= \left( \begin{array}{cccc} (y_1,y_1) &amp;(y_2,y_1)&amp; \cdots&amp;(y_m,y_1) \\ (y_1,y_2) &amp;(y_2,y_2)&amp;\cdots&amp;(y_m,y_2) \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ (y_1,y_m) &amp;(y_2,y_m)&amp;\cdots&amp;(y_m,y_m) \\ \end{array} \right) $$ where $y_i\in L^2(\Omega)$ for all $i=1,\dots,m$ , $\alpha\geq 0$ and $(\cdot,\cdot)$ denote the inner product in $L^2(\Omega)$ with $\Omega\subset\mathbb{R}^2$. I would like to prove this matrix is Positive-definite, any suggestions? </p>
[ { "answer_id": 2089786, "score": 7, "text": "<p>Since $A$ is a Gram matrix, $A$ is positive semidefinite, so $A+\\alpha I$ is positive semidefinite for $\\alpha \\ge 0$.</p>\n\n<p>Then, for any vector $w = \\begin{bmatrix}x\\\\y\\end{bmatrix} \\in \\mathbb{R}^{2m}$ (where $x,y \\in \\mathbb{R}^m$) we have </p>\n\n<p>$w^TCw$ $= \\begin{bmatrix}x^T &amp; y^T\\end{bmatrix}\\begin{bmatrix}A+\\alpha I &amp; -A-\\alpha I \\\\ -A-\\alpha I &amp; A+\\alpha I\\end{bmatrix}\\begin{bmatrix}x\\\\y\\end{bmatrix}$</p>\n\n<p>$= x^T(A+\\alpha I)x - x^T(A+\\alpha I)y - y^T(A+\\alpha I)x + y^T(A+\\alpha I)y$</p>\n\n<p>$= x^T(A+\\alpha I)(x-y) - y^T(A+\\alpha I)(x-y)$ $= (x-y)^T(A+\\alpha I)(x-y) \\ge 0$,</p>\n\n<p>where we have used the fact that $A+\\alpha I$ is positive semidefinite. </p>\n\n<p>Since $w^TCw \\ge 0$ for all $w \\in \\mathbb{R}^{2m}$ and $C$ is symmetric, we have that $C$ is positive semidefinite.</p>\n\n<hr>\n\n<p>Note that for any $x \\in \\mathbb{R}^m$, the vector $w = \\begin{bmatrix}x\\\\x\\end{bmatrix} \\in \\mathbb{R}^{2m}$ satisfies $w^TCw = 0$. </p>\n\n<p>Thus, $m$ eigenvalues of $C$ are zero. So, $C$ is not positive definite.</p>\n" }, { "answer_id": 2089791, "score": 7, "text": "<p>One approach is to note that \n$$\nC= \\pmatrix{1&amp;-1\\\\-1&amp;1} \\otimes\n(A+\\alpha I)\n$$\nwhere $\\otimes$ denotes the Kronecker product. 
From there, it suffices to note that the Kronecker product of positive semidefinite matrices is positive semidefinite.</p>\n" }, { "answer_id": 2089786, "score": 7, "text": "<p>Since $A$ is a Gram matrix, $A$ is positive semidefinite, so $A+\\alpha I$ is positive semidefinite for $\\alpha \\ge 0$.</p>\n\n<p>Then, for any vector $w = \\begin{bmatrix}x\\\\y\\end{bmatrix} \\in \\mathbb{R}^{2m}$ (where $x,y \\in \\mathbb{R}^m$) we have </p>\n\n<p>$w^TCw$ $= \\begin{bmatrix}x^T &amp; y^T\\end{bmatrix}\\begin{bmatrix}A+\\alpha I &amp; -A-\\alpha I \\\\ -A-\\alpha I &amp; A+\\alpha I\\end{bmatrix}\\begin{bmatrix}x\\\\y\\end{bmatrix}$</p>\n\n<p>$= x^T(A+\\alpha I)x - x^T(A+\\alpha I)y - y^T(A+\\alpha I)x + y^T(A+\\alpha I)y$</p>\n\n<p>$= x^T(A+\\alpha I)(x-y) - y^T(A+\\alpha I)(x-y)$ $= (x-y)^T(A+\\alpha I)(x-y) \\ge 0$,</p>\n\n<p>where we have used the fact that $A+\\alpha I$ is positive semidefinite. </p>\n\n<p>Since $w^TCw \\ge 0$ for all $w \\in \\mathbb{R}^{2m}$ and $C$ is symmetric, we have that $C$ is positive semidefinite.</p>\n\n<hr>\n\n<p>Note that for any $x \\in \\mathbb{R}^m$, the vector $w = \\begin{bmatrix}x\\\\x\\end{bmatrix} \\in \\mathbb{R}^{2m}$ satisfies $w^TCw = 0$. </p>\n\n<p>Thus, $m$ eigenvalues of $C$ are zero. So, $C$ is not positive definite.</p>\n" }, { "answer_id": 2089791, "score": 7, "text": "<p>One approach is to note that \n$$\nC= \\pmatrix{1&amp;-1\\\\-1&amp;1} \\otimes\n(A+\\alpha I)\n$$\nwhere $\\otimes$ denotes the Kronecker product. From there, it suffices to note that the Kronecker product of positive semidefinite matrices is positive semidefinite.</p>\n" } ]
Question: How to prove the positive-definiteness of this matrix? Answer 1: <p>Since $A$ is a Gram matrix, $A$ is positive semidefinite, so $A+\alpha I$ is positive semidefinite for $\alpha \ge 0$.</p> <p>Then, for any vector $w = \begin{bmatrix}x\\y\end{bmatrix} \in \mathbb{R}^{2m}$ (where $x,y \in \mathbb{R}^m$) we have </p> <p>$w^TCw$ $= \begin{bmatrix}x^T &amp; y^T\end{bmatrix}\begin{bmatrix}A+\alpha I &amp; -A-\alpha I \\ -A-\alpha I &amp; A+\alpha I\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$</p> <p>$= x^T(A+\alpha I)x - x^T(A+\alpha I)y - y^T(A+\alpha I)x + y^T(A+\alpha I)y$</p> <p>$= x^T(A+\alpha I)(x-y) - y^T(A+\alpha I)(x-y)$ $= (x-y)^T(A+\alpha I)(x-y) \ge 0$,</p> <p>where we have used the fact that $A+\alpha I$ is positive semidefinite. </p> <p>Since $w^TCw \ge 0$ for all $w \in \mathbb{R}^{2m}$ and $C$ is symmetric, we have that $C$ is positive semidefinite.</p> <hr> <p>Note that for any $x \in \mathbb{R}^m$, the vector $w = \begin{bmatrix}x\\x\end{bmatrix} \in \mathbb{R}^{2m}$ satisfies $w^TCw = 0$. </p> <p>Thus, $m$ eigenvalues of $C$ are zero. So, $C$ is not positive definite.</p> Answer 2: <p>One approach is to note that $$ C= \pmatrix{1&amp;-1\\-1&amp;1} \otimes (A+\alpha I) $$ where $\otimes$ denotes the Kronecker product. 
From there, it suffices to note that the Kronecker product of positive semidefinite matrices is positive semidefinite.</p> Answer 3: <p>Since $A$ is a Gram matrix, $A$ is positive semidefinite, so $A+\alpha I$ is positive semidefinite for $\alpha \ge 0$.</p> <p>Then, for any vector $w = \begin{bmatrix}x\\y\end{bmatrix} \in \mathbb{R}^{2m}$ (where $x,y \in \mathbb{R}^m$) we have </p> <p>$w^TCw$ $= \begin{bmatrix}x^T &amp; y^T\end{bmatrix}\begin{bmatrix}A+\alpha I &amp; -A-\alpha I \\ -A-\alpha I &amp; A+\alpha I\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$</p> <p>$= x^T(A+\alpha I)x - x^T(A+\alpha I)y - y^T(A+\alpha I)x + y^T(A+\alpha I)y$</p> <p>$= x^T(A+\alpha I)(x-y) - y^T(A+\alpha I)(x-y)$ $= (x-y)^T(A+\alpha I)(x-y) \ge 0$,</p> <p>where we have used the fact that $A+\alpha I$ is positive semidefinite. </p> <p>Since $w^TCw \ge 0$ for all $w \in \mathbb{R}^{2m}$ and $C$ is symmetric, we have that $C$ is positive semidefinite.</p> <hr> <p>Note that for any $x \in \mathbb{R}^m$, the vector $w = \begin{bmatrix}x\\x\end{bmatrix} \in \mathbb{R}^{2m}$ satisfies $w^TCw = 0$. </p> <p>Thus, $m$ eigenvalues of $C$ are zero. So, $C$ is not positive definite.</p> Answer 4: <p>One approach is to note that $$ C= \pmatrix{1&amp;-1\\-1&amp;1} \otimes (A+\alpha I) $$ where $\otimes$ denotes the Kronecker product. From there, it suffices to note that the Kronecker product of positive semidefinite matrices is positive semidefinite.</p>
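The quadratic-form argument in the first answer is easy to spot-check numerically. In the sketch below (my own; finite-dimensional vectors stand in for the $L^2$ functions $y_i$, and the seed and sizes are arbitrary), $w^T C w$ is never negative, while $w=(x,x)$ gives $0$, so $C$ is positive semidefinite but not positive definite:

```python
import random

random.seed(0)
m, d, alpha = 3, 5, 0.7   # m vectors y_i in R^d stand in for L^2 functions

ys = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(m)]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

# Gram matrix A with A[i][j] = (y_j, y_i), plus alpha * I.
B = [[dot(ys[j], ys[i]) + (alpha if i == j else 0) for j in range(m)]
     for i in range(m)]

# C = [[B, -B], [-B, B]] as a 2m x 2m matrix.
C = [[B[i % m][j % m] * (1 if (i < m) == (j < m) else -1)
      for j in range(2 * m)] for i in range(2 * m)]

def quad(w):
    """The quadratic form w^T C w."""
    return sum(C[i][j] * w[i] * w[j]
               for i in range(2 * m) for j in range(2 * m))

for _ in range(200):
    w = [random.uniform(-1, 1) for _ in range(2 * m)]
    assert quad(w) >= -1e-9           # never negative: positive semidefinite
x = [1.0, -2.0, 0.5]
assert abs(quad(x + x)) < 1e-9        # w = (x, x) gives 0: not definite
```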
Comparing discrete sums and integrals
2,089,929
4
https://math.stackexchange.com/questions/2089929/comparing-discrete-sums-and-integrals
<p>An integral is, loosely, a summation over a continuum of values. Taking for example the graph of $y=x$ for $x$ in $[0,2]$, the area/integral is $2$. Now if I take the summation of the discrete $y=x$ for $x \in \mathbb{Z}$ between $[0,2]$, the summation is $0+1+2=3$.</p> <p>What I'm wondering is how the summation over continuous terms (which does include the discrete terms) can be less than the summation over discrete terms?</p>
[ { "answer_id": 2089956, "score": 6, "text": "<p>Thinking of integral as sum of continuous numbers is unsound at best, and your question is a good example of why. Instead, a better comparison can be made with sum of areas of rectangles of a certain width (say, <code>w</code>) whose heights match the function (i.e. <code>x</code>). The smaller the width of the bars, the closer to the integral you get. They both become equal as the width of the bars becomes infinitesimally small.</p>\n\n<p>Let me try to illustrate this with a few of pictures. Firstly, here's a picture representing your example. The sum of integers is represented by yellow bars of width 1, while the integral is represented by the green area.</p>\n\n<p><a href=\"https://i.stack.imgur.com/hSGHe.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/hSGHe.png\" alt=\"coarse sum\"></a>\nYou can see that the sum of bars has an extra rectangular space, of height 2 and width 0.5, at the very right. That's why your sum is 1 more than the integral.</p>\n\n<p>A better approximation can be made if we reduced the width of the bars. For example, in the following picture, <code>w = 0.25</code>:\n<a href=\"https://i.stack.imgur.com/5EMAY.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/5EMAY.png\" alt=\"fine sum\"></a>\nClearly, the area of the bars is now closer to the actual integral than the previous example. The same trend continues as you keep making the bars narrower, ultimately converging exactly to the integral as <code>w</code> tends to 0.</p>\n" }, { "answer_id": 2089972, "score": 1, "text": "<p>As already pointed out by aksadv and the comments, sums and integrals produce similar results, but are different concepts that have to be evaluated differently. 
That said, the similarity of their results is often exploited by using simple integrals to approximate difficult sums and vice versa.</p>\n\n<p>For instance, $\\sum_{n=1}^N n^k$ is a difficult sum, but $\\int_1^N x^k \\, dx$ is quite simple, so using the integral and the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a> which characterizes the difference between the two, we can find an exact formula for the sum.</p>\n" }, { "answer_id": 2089956, "score": 6, "text": "<p>Thinking of integral as sum of continuous numbers is unsound at best, and your question is a good example of why. Instead, a better comparison can be made with sum of areas of rectangles of a certain width (say, <code>w</code>) whose heights match the function (i.e. <code>x</code>). The smaller the width of the bars, the closer to the integral you get. They both become equal as the width of the bars becomes infinitesimally small.</p>\n\n<p>Let me try to illustrate this with a few of pictures. Firstly, here's a picture representing your example. The sum of integers is represented by yellow bars of width 1, while the integral is represented by the green area.</p>\n\n<p><a href=\"https://i.stack.imgur.com/hSGHe.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/hSGHe.png\" alt=\"coarse sum\"></a>\nYou can see that the sum of bars has an extra rectangular space, of height 2 and width 0.5, at the very right. That's why your sum is 1 more than the integral.</p>\n\n<p>A better approximation can be made if we reduced the width of the bars. For example, in the following picture, <code>w = 0.25</code>:\n<a href=\"https://i.stack.imgur.com/5EMAY.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/5EMAY.png\" alt=\"fine sum\"></a>\nClearly, the area of the bars is now closer to the actual integral than the previous example. 
The same trend continues as you keep making the bars narrower, ultimately converging exactly to the integral as <code>w</code> tends to 0.</p>\n" }, { "answer_id": 2089972, "score": 1, "text": "<p>As already pointed out by aksadv and the comments, sums and integrals produce similar results, but are different concepts that have to be evaluated differently. That said, the similarity of their results is often exploited by using simple integrals to approximate difficult sums and vice versa.</p>\n\n<p>For instance, $\\sum_{n=1}^N n^k$ is a difficult sum, but $\\int_1^N x^k \\, dx$ is quite simple, so using the integral and the <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin Summation Formula</a> which characterizes the difference between the two, we can find an exact formula for the sum.</p>\n" } ]
Question: Comparing discrete sums and integrals Answer 1: <p>Thinking of integral as sum of continuous numbers is unsound at best, and your question is a good example of why. Instead, a better comparison can be made with sum of areas of rectangles of a certain width (say, <code>w</code>) whose heights match the function (i.e. <code>x</code>). The smaller the width of the bars, the closer to the integral you get. They both become equal as the width of the bars becomes infinitesimally small.</p> <p>Let me try to illustrate this with a few of pictures. Firstly, here's a picture representing your example. The sum of integers is represented by yellow bars of width 1, while the integral is represented by the green area.</p> <p><a href="https://i.stack.imgur.com/hSGHe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hSGHe.png" alt="coarse sum"></a> You can see that the sum of bars has an extra rectangular space, of height 2 and width 0.5, at the very right. That's why your sum is 1 more than the integral.</p> <p>A better approximation can be made if we reduced the width of the bars. For example, in the following picture, <code>w = 0.25</code>: <a href="https://i.stack.imgur.com/5EMAY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5EMAY.png" alt="fine sum"></a> Clearly, the area of the bars is now closer to the actual integral than the previous example. The same trend continues as you keep making the bars narrower, ultimately converging exactly to the integral as <code>w</code> tends to 0.</p> Answer 2: <p>As already pointed out by aksadv and the comments, sums and integrals produce similar results, but are different concepts that have to be evaluated differently. 
That said, the similarity of their results is often exploited by using simple integrals to approximate difficult sums and vice versa.</p> <p>For instance, $\sum_{n=1}^N n^k$ is a difficult sum, but $\int_1^N x^k \, dx$ is quite simple, so using the integral and the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Summation Formula</a> which characterizes the difference between the two, we can find an exact formula for the sum.</p> Answer 3: <p>Thinking of integral as sum of continuous numbers is unsound at best, and your question is a good example of why. Instead, a better comparison can be made with sum of areas of rectangles of a certain width (say, <code>w</code>) whose heights match the function (i.e. <code>x</code>). The smaller the width of the bars, the closer to the integral you get. They both become equal as the width of the bars becomes infinitesimally small.</p> <p>Let me try to illustrate this with a few of pictures. Firstly, here's a picture representing your example. The sum of integers is represented by yellow bars of width 1, while the integral is represented by the green area.</p> <p><a href="https://i.stack.imgur.com/hSGHe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hSGHe.png" alt="coarse sum"></a> You can see that the sum of bars has an extra rectangular space, of height 2 and width 0.5, at the very right. That's why your sum is 1 more than the integral.</p> <p>A better approximation can be made if we reduced the width of the bars. For example, in the following picture, <code>w = 0.25</code>: <a href="https://i.stack.imgur.com/5EMAY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5EMAY.png" alt="fine sum"></a> Clearly, the area of the bars is now closer to the actual integral than the previous example. 
The same trend continues as you keep making the bars narrower, ultimately converging exactly to the integral as <code>w</code> tends to 0.</p> Answer 4: <p>As already pointed out by aksadv and the comments, sums and integrals produce similar results, but are different concepts that have to be evaluated differently. That said, the similarity of their results is often exploited by using simple integrals to approximate difficult sums and vice versa.</p> <p>For instance, $\sum_{n=1}^N n^k$ is a difficult sum, but $\int_1^N x^k \, dx$ is quite simple, so using the integral and the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin Summation Formula</a> which characterizes the difference between the two, we can find an exact formula for the sum.</p>
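The convergence described in the answers above — bar areas approaching the integral as the width <code>w</code> shrinks — can be checked numerically. The following script is an illustrative sketch added here (it is not part of the original answers); it uses right-endpoint rectangles for $f(x)=x$ on $[0,4]$, whose exact integral is $8$:

```python
# Right-endpoint Riemann sum for f on [a, b] with bar width w.
# For f(x) = x this overshoots the integral by exactly w*(b - a)/2,
# so the error shrinks linearly as the bars get narrower.
def riemann_sum(f, a, b, w):
    total = 0.0
    steps = int(round((b - a) / w))
    for k in range(1, steps + 1):
        total += f(a + k * w) * w
    return total

exact = 4 ** 2 / 2  # integral of x dx over [0, 4]
for w in [1.0, 0.25, 0.0625]:
    approx = riemann_sum(lambda x: x, 0.0, 4.0, w)
    print(w, approx, approx - exact)  # errors: 2.0, 0.5, 0.125
```

Halving <code>w</code> halves the error here, matching the picture-based argument that the leftover rectangles shrink with the bar width.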
If $f$ bounded and $f&#39;&#39;&gt;f$ then $f$ decreases exponentially
2,090,295
4
https://math.stackexchange.com/questions/2090295/if-f-bounded-and-ff-then-f-decreases-exponentially
<p>Let $f$ be a function of class $C^2$ from $\mathbb{R}^{+,*}$ to $\mathbb{R}^{+,*}$ (where $\mathbb{R}^{+,*}$ denotes the set of nonnegative reals) such that $f''&gt;f$ and $f$ is bounded above. Show that $f(x) \le f(0)e^{-x}$ for all $x\ge 0$.</p>
[ { "answer_id": 2090617, "score": 3, "text": "<p>$f'$ is <em>increasing</em> because $f''(x) &gt; f(x) \\ge 0$. Then $f'$ must be\n<em>negative</em> because $f$ would be unbounded otherwise.</p>\n\n<p>First define $g(x) = e^{-x} (f'(x) + f(x))$. Then\n$$\n g'(x) = e^{-x}(f''(x) - f(x)) &gt; 0\n$$\nso that $g$ is increasing. It follows that for $ x &lt; a$\n$$\n e^{-x} (f'(x) + f(x)) \\le e^{-a} (f'(a) + f(a)) \\le e^{-a}M \n$$\nwhere $M$ is an upper bound for $f$. Letting $a \\to \\infty$ we get\nthat the left-hand side is $\\le 0$, i.e.\n$$\n f'(x) + f(x) \\le 0 \\text{ for all } x \\ge 0 \\, .\n$$</p>\n\n<p>Finally define $h(x) = e^x f(x)$. Then\n$$\n h'(x) = e^x(f'(x) + f(x)) \\le 0\n$$\nso that $h$ is decreasing. It follows that\n$$\n e^x f(x) = h(x) \\le h(0) = f(0) \\, .\n$$</p>\n" }, { "answer_id": 2090611, "score": 1, "text": "<p>Let $0\\le a \\le x$. It follows that\n$$\nf''(x)-f'(x)&gt;-f'(x)+f(x)\n\\\\\n\\frac{d}{dx}(e^{-x}f'(x))&gt;-\\frac{d}{dx}(e^{-x}f(x))\n\\\\\ne^{-x}f'(x)-e^{-a}f'(a)&gt;-e^{-x}f(x)+e^{-a}f(a)\n\\\\\n\\frac{d}{dx}(e^{x}f(x))&gt;e^{2x-a}(f'(a)+f(a))\n\\\\\ne^{x}f(x)-e^{a}f(a)&gt;\\frac12e^{-a}(e^{2x}-e^{2a})(f'(a)+f(a))\n\\\\\nf(x)&gt;e^{a-x}f(a)+\\frac12(e^{x-a}-e^{a-x})(f'(a)+f(a))\n$$\nAs the left side is bounded, so is the right side. This can only be if\n$$\n\\frac12e^{x-a}(f'(a)+f(a))\\le0,\n$$\nand this must hold for all $a&gt;0$.</p>\n\n<p>Thus $\\frac{d}{dx}(e^xf(x))\\le0$ which directly leads to the claim.</p>\n" }, { "answer_id": 2090617, "score": 3, "text": "<p>$f'$ is <em>increasing</em> because $f''(x) &gt; f(x) \\ge 0$. Then $f'$ must be\n<em>negative</em> because $f$ would be unbounded otherwise.</p>\n\n<p>First define $g(x) = e^{-x} (f'(x) + f(x))$. Then\n$$\n g'(x) = e^{-x}(f''(x) - f(x)) &gt; 0\n$$\nso that $g$ is increasing. It follows that for $ x &lt; a$\n$$\n e^{-x} (f'(x) + f(x)) \\le e^{-a} (f'(a) + f(a)) \\le e^{-a}M \n$$\nwhere $M$ is an upper bound for $f$. 
Letting $a \\to \\infty$ we get\nthat the left-hand side is $\\le 0$, i.e.\n$$\n f'(x) + f(x) \\le 0 \\text{ for all } x \\ge 0 \\, .\n$$</p>\n\n<p>Finally define $h(x) = e^x f(x)$. Then\n$$\n h'(x) = e^x(f'(x) + f(x)) \\le 0\n$$\nso that $h$ is decreasing. It follows that\n$$\n e^x f(x) = h(x) \\le h(0) = f(0) \\, .\n$$</p>\n" }, { "answer_id": 2090611, "score": 1, "text": "<p>Let $0\\le a \\le x$. It follows that\n$$\nf''(x)-f'(x)&gt;-f'(x)+f(x)\n\\\\\n\\frac{d}{dx}(e^{-x}f'(x))&gt;-\\frac{d}{dx}(e^{-x}f(x))\n\\\\\ne^{-x}f'(x)-e^{-a}f'(a)&gt;-e^{-x}f(x)+e^{-a}f(a)\n\\\\\n\\frac{d}{dx}(e^{x}f(x))&gt;e^{2x-a}(f'(a)+f(a))\n\\\\\ne^{x}f(x)-e^{a}f(a)&gt;\\frac12e^{-a}(e^{2x}-e^{2a})(f'(a)+f(a))\n\\\\\nf(x)&gt;e^{a-x}f(a)+\\frac12(e^{x-a}-e^{a-x})(f'(a)+f(a))\n$$\nAs the left side is bounded, so is the right side. This can only be if\n$$\n\\frac12e^{x-a}(f'(a)+f(a))\\le0,\n$$\nand this must hold for all $a&gt;0$.</p>\n\n<p>Thus $\\frac{d}{dx}(e^xf(x))\\le0$ which directly leads to the claim.</p>\n" } ]
Question: If $f$ bounded and $f&#39;&#39;&gt;f$ then $f$ decreases exponentially Answer 1: <p>$f'$ is <em>increasing</em> because $f''(x) &gt; f(x) \ge 0$. Then $f'$ must be <em>negative</em> because $f$ would be unbounded otherwise.</p> <p>First define $g(x) = e^{-x} (f'(x) + f(x))$. Then $$ g'(x) = e^{-x}(f''(x) - f(x)) &gt; 0 $$ so that $g$ is increasing. It follows that for $ x &lt; a$ $$ e^{-x} (f'(x) + f(x)) \le e^{-a} (f'(a) + f(a)) \le e^{-a}M $$ where $M$ is an upper bound for $f$. Letting $a \to \infty$ we get that the left-hand side is $\le 0$, i.e. $$ f'(x) + f(x) \le 0 \text{ for all } x \ge 0 \, . $$</p> <p>Finally define $h(x) = e^x f(x)$. Then $$ h'(x) = e^x(f'(x) + f(x)) \le 0 $$ so that $h$ is decreasing. It follows that $$ e^x f(x) = h(x) \le h(0) = f(0) \, . $$</p> Answer 2: <p>Let $0\le a \le x$. It follows that $$ f''(x)-f'(x)&gt;-f'(x)+f(x) \\ \frac{d}{dx}(e^{-x}f'(x))&gt;-\frac{d}{dx}(e^{-x}f(x)) \\ e^{-x}f'(x)-e^{-a}f'(a)&gt;-e^{-x}f(x)+e^{-a}f(a) \\ \frac{d}{dx}(e^{x}f(x))&gt;e^{2x-a}(f'(a)+f(a)) \\ e^{x}f(x)-e^{a}f(a)&gt;\frac12e^{-a}(e^{2x}-e^{2a})(f'(a)+f(a)) \\ f(x)&gt;e^{a-x}f(a)+\frac12(e^{x-a}-e^{a-x})(f'(a)+f(a)) $$ As the left side is bounded, so is the right side. This can only be if $$ \frac12e^{x-a}(f'(a)+f(a))\le0, $$ and this must hold for all $a&gt;0$.</p> <p>Thus $\frac{d}{dx}(e^xf(x))\le0$ which directly leads to the claim.</p> Answer 3: <p>$f'$ is <em>increasing</em> because $f''(x) &gt; f(x) \ge 0$. Then $f'$ must be <em>negative</em> because $f$ would be unbounded otherwise.</p> <p>First define $g(x) = e^{-x} (f'(x) + f(x))$. Then $$ g'(x) = e^{-x}(f''(x) - f(x)) &gt; 0 $$ so that $g$ is increasing. It follows that for $ x &lt; a$ $$ e^{-x} (f'(x) + f(x)) \le e^{-a} (f'(a) + f(a)) \le e^{-a}M $$ where $M$ is an upper bound for $f$. Letting $a \to \infty$ we get that the left-hand side is $\le 0$, i.e. $$ f'(x) + f(x) \le 0 \text{ for all } x \ge 0 \, . $$</p> <p>Finally define $h(x) = e^x f(x)$. 
Then $$ h'(x) = e^x(f'(x) + f(x)) \le 0 $$ so that $h$ is decreasing. It follows that $$ e^x f(x) = h(x) \le h(0) = f(0) \, . $$</p> Answer 4: <p>Let $0\le a \le x$. It follows that $$ f''(x)-f'(x)&gt;-f'(x)+f(x) \\ \frac{d}{dx}(e^{-x}f'(x))&gt;-\frac{d}{dx}(e^{-x}f(x)) \\ e^{-x}f'(x)-e^{-a}f'(a)&gt;-e^{-x}f(x)+e^{-a}f(a) \\ \frac{d}{dx}(e^{x}f(x))&gt;e^{2x-a}(f'(a)+f(a)) \\ e^{x}f(x)-e^{a}f(a)&gt;\frac12e^{-a}(e^{2x}-e^{2a})(f'(a)+f(a)) \\ f(x)&gt;e^{a-x}f(a)+\frac12(e^{x-a}-e^{a-x})(f'(a)+f(a)) $$ As the left side is bounded, so is the right side. This can only be if $$ \frac12e^{x-a}(f'(a)+f(a))\le0, $$ and this must hold for all $a&gt;0$.</p> <p>Thus $\frac{d}{dx}(e^xf(x))\le0$ which directly leads to the claim.</p>
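Both answers above conclude that $e^x f(x)$ is non-increasing, i.e. $f(x)\le f(0)e^{-x}$. As a quick numerical sanity check (an added sketch, not from the original answers), take the sample function $f(x)=e^{-2x}$, which satisfies the hypotheses: $f>0$, $f''=4f>f$, and $f$ is bounded:

```python
import math

# Sample function satisfying the hypotheses: f > 0, f'' = 4 f > f, f bounded.
def f(x):
    return math.exp(-2 * x)

def bound(x):
    return f(0) * math.exp(-x)  # the claimed envelope f(0) e^{-x}

grid = [k * 0.05 for k in range(200)]  # x in [0, 10)
assert all(f(x) <= bound(x) + 1e-12 for x in grid)
print("f(x) <= f(0) e^{-x} holds on the sample grid")
```

This checks only one admissible function, of course; the proofs above cover the general case.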
Find the sum of first $n$ terms of the series: $\frac{1}{1\times2}+\frac{1}{2\times3}+\frac{1}{3\times4}+\cdots +\frac{1}{n\times(n+1)}$
2,090,435
4
https://math.stackexchange.com/questions/2090435/find-the-sum-of-first-n-terms-of-the-series-frac11-times2-frac12-ti
<blockquote> <p>I have the series $$\frac{1}{1\times2}+\frac{1}{2\times3}+\frac{1}{3\times4}+\cdots +\frac{1}{n\times(n+1)}$$ </p> </blockquote> <p>I know the following formulas: $$1+2+3+\cdots +n=\frac {n (n+1)}{2}\tag1$$ $$1^2+2^2+3^2+\cdots +n^2=\frac{n(n+1)(2n+1)}{6}\tag2$$ $$1^3+2^3+3^3+\cdots +n^3=\left(\frac{n(n+1)}{2}\right)^2\tag3$$ But none of $(1)(2)$ and $(3)$ worked.</p> <p>Please help___.</p>
[ { "answer_id": 2090440, "score": 11, "text": "<p>Try to observe that $$\\frac{1}{n\\times (n+1)}=\\frac{1}{n}-\\frac{1}{n+1}$$ $\\therefore$ The given series can be written as $$1-\\frac12+\\frac12-\\frac13+\\frac13+\\cdots -\\frac{1}{n}+\\frac1n-\\frac{1}{n+1}$$ Each term will cancel out other term except $1$ and $\\frac{1}{n+1}$ .</p>\n\n<p>$\\therefore$ $$=1-\\frac1{n+1}$$ $$=\\frac{n}{n+1}$$ Hope it helps!!!</p>\n" }, { "answer_id": 2090442, "score": 2, "text": "<p><strong>Hint</strong>: We can write $$\\frac {1}{n(n+1)} =\\frac {1}{n} -\\frac {1}{n+1} $$ Can you take it from here?</p>\n" }, { "answer_id": 2090466, "score": 1, "text": "<p>Generally, for $a_n = a_0+(n-1)d$,$\\frac{1}{a_n a_{n+1}}=\\frac{1}{d}(\\frac{1}{a_n}-\\frac{1}{a_{n+1}})$.</p>\n\n<p>Similarly, higher order fractions $\\frac{1}{a_n a_{n+1} a_{n+2}}\n=\\frac{1}{2d}(\\frac{1}{a_n a_{n+1}} - \\frac{1}{a_{n+1}a_{n+2}})$.</p>\n\n<p>When seeing these kinds of sequence series, you can have a try to split items cancel each other.</p>\n" }, { "answer_id": 2090440, "score": 11, "text": "<p>Try to observe that $$\\frac{1}{n\\times (n+1)}=\\frac{1}{n}-\\frac{1}{n+1}$$ $\\therefore$ The given series can be written as $$1-\\frac12+\\frac12-\\frac13+\\frac13+\\cdots -\\frac{1}{n}+\\frac1n-\\frac{1}{n+1}$$ Each term will cancel out other term except $1$ and $\\frac{1}{n+1}$ .</p>\n\n<p>$\\therefore$ $$=1-\\frac1{n+1}$$ $$=\\frac{n}{n+1}$$ Hope it helps!!!</p>\n" }, { "answer_id": 2090442, "score": 2, "text": "<p><strong>Hint</strong>: We can write $$\\frac {1}{n(n+1)} =\\frac {1}{n} -\\frac {1}{n+1} $$ Can you take it from here?</p>\n" }, { "answer_id": 2090466, "score": 1, "text": "<p>Generally, for $a_n = a_0+(n-1)d$,$\\frac{1}{a_n a_{n+1}}=\\frac{1}{d}(\\frac{1}{a_n}-\\frac{1}{a_{n+1}})$.</p>\n\n<p>Similarly, higher order fractions $\\frac{1}{a_n a_{n+1} a_{n+2}}\n=\\frac{1}{2d}(\\frac{1}{a_n a_{n+1}} - \\frac{1}{a_{n+1}a_{n+2}})$.</p>\n\n<p>When seeing these kinds of sequence series, you can have a 
try to split the terms so that they cancel each other.</p>\n" } ]
Question: Find the sum of first $n$ terms of the series: $\frac{1}{1\times2}+\frac{1}{2\times3}+\frac{1}{3\times4}+\cdots +\frac{1}{n\times(n+1)}$ Answer 1: <p>Try to observe that $$\frac{1}{n\times (n+1)}=\frac{1}{n}-\frac{1}{n+1}$$ $\therefore$ The given series can be written as $$1-\frac12+\frac12-\frac13+\frac13-\cdots-\frac{1}{n}+\frac1n-\frac{1}{n+1}$$ Each term will cancel with another term except $1$ and $\frac{1}{n+1}$.</p> <p>$\therefore$ $$=1-\frac1{n+1}$$ $$=\frac{n}{n+1}$$ Hope it helps!!!</p> Answer 2: <p><strong>Hint</strong>: We can write $$\frac {1}{n(n+1)} =\frac {1}{n} -\frac {1}{n+1} $$ Can you take it from here?</p> Answer 3: <p>Generally, for $a_n = a_0+(n-1)d$, $\frac{1}{a_n a_{n+1}}=\frac{1}{d}(\frac{1}{a_n}-\frac{1}{a_{n+1}})$.</p> <p>Similarly, higher order fractions $\frac{1}{a_n a_{n+1} a_{n+2}} =\frac{1}{2d}(\frac{1}{a_n a_{n+1}} - \frac{1}{a_{n+1}a_{n+2}})$.</p> <p>When seeing these kinds of sequence series, you can try to split the terms so that they cancel each other.</p> Answer 4: <p>Try to observe that $$\frac{1}{n\times (n+1)}=\frac{1}{n}-\frac{1}{n+1}$$ $\therefore$ The given series can be written as $$1-\frac12+\frac12-\frac13+\frac13-\cdots-\frac{1}{n}+\frac1n-\frac{1}{n+1}$$ Each term will cancel with another term except $1$ and $\frac{1}{n+1}$.</p> <p>$\therefore$ $$=1-\frac1{n+1}$$ $$=\frac{n}{n+1}$$ Hope it helps!!!</p> Answer 5: <p><strong>Hint</strong>: We can write $$\frac {1}{n(n+1)} =\frac {1}{n} -\frac {1}{n+1} $$ Can you take it from here?</p> Answer 6: <p>Generally, for $a_n = a_0+(n-1)d$, $\frac{1}{a_n a_{n+1}}=\frac{1}{d}(\frac{1}{a_n}-\frac{1}{a_{n+1}})$.</p> <p>Similarly, higher order fractions $\frac{1}{a_n a_{n+1} a_{n+2}} =\frac{1}{2d}(\frac{1}{a_n a_{n+1}} - \frac{1}{a_{n+1}a_{n+2}})$.</p> <p>When seeing these kinds of sequence series, you can try to split the terms so that they cancel each other.</p>
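The telescoping identity above gives the closed form $\frac{n}{n+1}$. A small exact-arithmetic check (an editorial addition in Python, not part of the original answers):

```python
from fractions import Fraction

# Exact check of sum_{k=1}^{n} 1/(k(k+1)) = n/(n+1) using rational arithmetic.
def partial_sum(n):
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

for n in range(1, 100):
    assert partial_sum(n) == Fraction(n, n + 1)
print(partial_sum(10))  # prints 10/11
```

Using `Fraction` avoids floating-point error, so the comparison with $\frac{n}{n+1}$ is exact.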
Every number from $1$ to $n$ gets multiplied by $2^n$
2,090,472
4
https://math.stackexchange.com/questions/2090472/every-number-from-1-to-n-gets-multiplied-by-2n
<p>Every natural number from $1$ to $n$ ($n&gt;0$) was multiplied by a power of two (the exponent can be different). After that all the numbers were summed. At what $n$ will the sum also be a power of $2$? Something tells me that it can be done at any $n$. I think we have to start with summing the numbers from $1$ to $n$ by using the formula $$\frac{n(n+1)}{2}.$$</p>
[ { "answer_id": 2090554, "score": 4, "text": "<p>It is always possible. Let $T_n=\\frac 12n(n+1)$ be the sum of the numbers from $1$ to $n$. For $n \\gt 1$ we can always find $k$ such that $2^{k-1} \\lt T_n \\lt 2^k$. We just have to find a set of numbers between $1$ and $n$ that sum to $2^k-T_n$ and multiply them all by $2$, multiplying all the rest by $1$. We can do this by taking $n, n-1, n-2 \\ldots$ until the sum gets too large, deleting the last one, and picking the smaller number we need to come out even. We can then achieve $2^m$ for $m \\gt k$ by multiplying all the multipliers by $2^{m-k}$</p>\n" }, { "answer_id": 2090542, "score": 3, "text": "<p>Assume $n$ is an \"$m$-bit number\", i.e., $2^{m-1}\\le n&lt;2^{m}$.\nLet $S$ be the sum of all numbers from $1$ to $n$ that are neither powers of $2$ nor $3$ times powers of $2$.\nThen\n$$ S\\le \\frac{n(n+1)}{2}-2^{m}+1\\le(2^m-1)(2^{m-1}-1)&lt;2^{2m-1},$$\nSo $S$ is at most a $2m-1$-bit number. By picking $2^m$ as coefficient for each non-power-of-2 summand, these therefore contribute $2^mS$.\nBut we still have $m$ summands of the form $2^k$ and at least $m-1$ sumands of the form $3\\cdot 2^k$. We will use these wisely:</p>\n\n<p>Suppose our sum so far ends in $r$ binary zeroes, i.e., it is $2^r s$ where $s$ is odd (and $r\\ge m$). 
</p>\n\n<p>If $s\\equiv 1\\pmod 4$ and we still have a number $3\\cdot2^k$ available, we can pick a suitable power of two as coefficient to arrive at \n$$ 2^rs+2^{r-k}\\cdot 3\\cdot 2^k=2^r(s+3)=2^{r+2}\\cdot\\frac{s+3}4,$$\na number with at least two more zeroes at the end and not more bits in total (unless we had $s=1$ and thus now have $2^{r+2}$, a power of two, as sum).</p>\n\n<p>On the other hand, if $s\\equiv 3\\pmod 4$ and we still have a number $2^k$ available, we can achieve\n$$2^rs+2^{r-k}\\cdot 2^k=2^{r+2}\\cdot\\frac{s+1}4$$\nas next sum, again a number with at least two more zeroes at the end and not more bits in total (unless we now have a power of two).</p>\n\n<p>Thus we can at least $m-1$ times (namely, as long as we can guarantee to still have a $2^k$ or $3\\cdot 2^k$ available, whatever is needed) increase the number of trailing $0$ digits by two without increasing the bit length of the sum (unless we achieve a power of $2$ as sum).\nAs $S$ has at most $2m-1$ bits, this means that we can zero $2(m-1)$ of these, i.e., we can arrive at a power of $2$, possibly without even using all our summands. The remaining summands, if any, can be added one by one without destroying the power-of-two-ness of the sum. So finally we arrive at a sum that is a power of $2$.</p>\n" }, { "answer_id": 2090582, "score": 0, "text": "<p>Are you talking about the following series :\n$$1.2^1,2.2^2,...n.2^n$$</p>\n\n<p>It is an Arithmetico Geometric series.\nSummation of its terms can be found out as follows:</p>\n\n<p>Let $$S = 1.2^1+2.2^2+3.2^3...+n.2^n $$\nNow multiplying both sides by the common ratio $2$, we get</p>\n\n<p>$$2.S = 1.2^2+2.2^3+3.2^4...+(n-1).2^n+n.2^{n+1}$$</p>\n\n<p>Subtracting $2^{nd}$ equation from $1^{st}$ equation, we get</p>\n\n<p>$$-S = 1.2^1+1.2^2+1.2^3+...+1.2^n-n.2^{n+1}$$</p>\n\n<p>Now leaving the last term of our series aside, we observe a G.P. 
with common ratio $2$ and first term $2$, upon summing it we get,</p>\n\n<p>$$-S = 2(2^n-1)-n.2^{n+1}$$\n$$-S = 2^{n+1}-n.2^{n+1}-2$$\n$$S=(n-1)2^{n+1}+2$$</p>\n\n<p>$$S = 2(1+(n-1)2^n)$$\nNow, we can see that the term $(n-1)2^n$ is even, so adding $1$ to it which is the case will make it odd. Now, for being power of $2$ all the factors of the numbers must be $2$ which is even. Hence, $S$ can't be power of $2$ except for $n = 1$ in which case $S = 2$.</p>\n\n<p>So, the only solution is $n = 1$.</p>\n" }, { "answer_id": 2090554, "score": 4, "text": "<p>It is always possible. Let $T_n=\\frac 12n(n+1)$ be the sum of the numbers from $1$ to $n$. For $n \\gt 1$ we can always find $k$ such that $2^{k-1} \\lt T_n \\lt 2^k$. We just have to find a set of numbers between $1$ and $n$ that sum to $2^k-T_n$ and multiply them all by $2$, multiplying all the rest by $1$. We can do this by taking $n, n-1, n-2 \\ldots$ until the sum gets too large, deleting the last one, and picking the smaller number we need to come out even. We can then achieve $2^m$ for $m \\gt k$ by multiplying all the multipliers by $2^{m-k}$</p>\n" }, { "answer_id": 2090542, "score": 3, "text": "<p>Assume $n$ is an \"$m$-bit number\", i.e., $2^{m-1}\\le n&lt;2^{m}$.\nLet $S$ be the sum of all numbers from $1$ to $n$ that are neither powers of $2$ nor $3$ times powers of $2$.\nThen\n$$ S\\le \\frac{n(n+1)}{2}-2^{m}+1\\le(2^m-1)(2^{m-1}-1)&lt;2^{2m-1},$$\nSo $S$ is at most a $2m-1$-bit number. By picking $2^m$ as coefficient for each non-power-of-2 summand, these therefore contribute $2^mS$.\nBut we still have $m$ summands of the form $2^k$ and at least $m-1$ sumands of the form $3\\cdot 2^k$. We will use these wisely:</p>\n\n<p>Suppose our sum so far ends in $r$ binary zeroes, i.e., it is $2^r s$ where $s$ is odd (and $r\\ge m$). 
</p>\n\n<p>If $s\\equiv 1\\pmod 4$ and we still have a number $3\\cdot2^k$ available, we can pick a suitable power of two as coefficient to arrive at \n$$ 2^rs+2^{r-k}\\cdot 3\\cdot 2^k=2^r(s+3)=2^{r+2}\\cdot\\frac{s+3}4,$$\na number with at least two more zeroes at the end and not more bits in total (unless we had $s=1$ and thus now have $2^{r+2}$, a power of two, as sum).</p>\n\n<p>On the other hand, if $s\\equiv 3\\pmod 4$ and we still have a number $2^k$ available, we can achieve\n$$2^rs+2^{r-k}\\cdot 2^k=2^{r+2}\\cdot\\frac{s+1}4$$\nas next sum, again a number with at least two more zeroes at the end and not more bits in total (unless we now have a power of two).</p>\n\n<p>Thus we can at least $m-1$ times (namely, as long as we can guarantee to still have a $2^k$ or $3\\cdot 2^k$ available, whatever is needed) increase the number of trailing $0$ digits by two without increasing the bit length of the sum (unless we achieve a power of $2$ as sum).\nAs $S$ has at most $2m-1$ bits, this means that we can zero $2(m-1)$ of these, i.e., we can arrive at a power of $2$, possibly without even using all our summands. The remaining summands, if any, can be added one by one without destroying the power-of-two-ness of the sum. So finally we arrive at a sum that is a power of $2$.</p>\n" }, { "answer_id": 2090582, "score": 0, "text": "<p>Are you talking about the following series :\n$$1.2^1,2.2^2,...n.2^n$$</p>\n\n<p>It is an Arithmetico Geometric series.\nSummation of its terms can be found out as follows:</p>\n\n<p>Let $$S = 1.2^1+2.2^2+3.2^3...+n.2^n $$\nNow multiplying both sides by the common ratio $2$, we get</p>\n\n<p>$$2.S = 1.2^2+2.2^3+3.2^4...+(n-1).2^n+n.2^{n+1}$$</p>\n\n<p>Subtracting $2^{nd}$ equation from $1^{st}$ equation, we get</p>\n\n<p>$$-S = 1.2^1+1.2^2+1.2^3+...+1.2^n-n.2^{n+1}$$</p>\n\n<p>Now leaving the last term of our series aside, we observe a G.P. 
with common ratio $2$ and first term $2$, upon summing it we get,</p>\n\n<p>$$-S = 2(2^n-1)-n.2^{n+1}$$\n$$-S = 2^{n+1}-n.2^{n+1}-2$$\n$$S=(n-1)2^{n+1}+2$$</p>\n\n<p>$$S = 2(1+(n-1)2^n)$$\nNow, we can see that the term $(n-1)2^n$ is even, so adding $1$ to it, as we do here, will make it odd. Now, for a number to be a power of $2$, all of its prime factors must be $2$. Hence, $S$ can't be a power of $2$ except for $n = 1$, in which case $S = 2$.</p>\n\n<p>So, the only solution is $n = 1$.</p>\n" } ]
Question: Every number from $1$ to $n$ gets multiplied by $2^n$ Answer 1: <p>It is always possible. Let $T_n=\frac 12n(n+1)$ be the sum of the numbers from $1$ to $n$. For $n \gt 1$ we can always find $k$ such that $2^{k-1} \lt T_n \lt 2^k$. We just have to find a set of numbers between $1$ and $n$ that sum to $2^k-T_n$ and multiply them all by $2$, multiplying all the rest by $1$. We can do this by taking $n, n-1, n-2 \ldots$ until the sum gets too large, deleting the last one, and picking the smaller number we need to come out even. We can then achieve $2^m$ for $m \gt k$ by multiplying all the multipliers by $2^{m-k}$</p> Answer 2: <p>Assume $n$ is an "$m$-bit number", i.e., $2^{m-1}\le n&lt;2^{m}$. Let $S$ be the sum of all numbers from $1$ to $n$ that are neither powers of $2$ nor $3$ times powers of $2$. Then $$ S\le \frac{n(n+1)}{2}-2^{m}+1\le(2^m-1)(2^{m-1}-1)&lt;2^{2m-1},$$ So $S$ is at most a $2m-1$-bit number. By picking $2^m$ as coefficient for each non-power-of-2 summand, these therefore contribute $2^mS$. But we still have $m$ summands of the form $2^k$ and at least $m-1$ sumands of the form $3\cdot 2^k$. We will use these wisely:</p> <p>Suppose our sum so far ends in $r$ binary zeroes, i.e., it is $2^r s$ where $s$ is odd (and $r\ge m$). 
</p> <p>If $s\equiv 1\pmod 4$ and we still have a number $3\cdot2^k$ available, we can pick a suitable power of two as coefficient to arrive at $$ 2^rs+2^{r-k}\cdot 3\cdot 2^k=2^r(s+3)=2^{r+2}\cdot\frac{s+3}4,$$ a number with at least two more zeroes at the end and not more bits in total (unless we had $s=1$ and thus now have $2^{r+2}$, a power of two, as sum).</p> <p>On the other hand, if $s\equiv 3\pmod 4$ and we still have a number $2^k$ available, we can achieve $$2^rs+2^{r-k}\cdot 2^k=2^{r+2}\cdot\frac{s+1}4$$ as next sum, again a number with at least two more zeroes at the end and not more bits in total (unless we now have a power of two).</p> <p>Thus we can at least $m-1$ times (namely, as long as we can guarantee to still have a $2^k$ or $3\cdot 2^k$ available, whatever is needed) increase the number of trailing $0$ digits by two without increasing the bit length of the sum (unless we achieve a power of $2$ as sum). As $S$ has at most $2m-1$ bits, this means that we can zero $2(m-1)$ of these, i.e., we can arrive at a power of $2$, possibly without even using all our summands. The remaining summands, if any, can be added one by one without destroying the power-of-two-ness of the sum. So finally we arrive at a sum that is a power of $2$.</p> Answer 3: <p>Are you talking about the following series : $$1.2^1,2.2^2,...n.2^n$$</p> <p>It is an Arithmetico Geometric series. Summation of its terms can be found out as follows:</p> <p>Let $$S = 1.2^1+2.2^2+3.2^3...+n.2^n $$ Now multiplying both sides by the common ratio $2$, we get</p> <p>$$2.S = 1.2^2+2.2^3+3.2^4...+(n-1).2^n+n.2^{n+1}$$</p> <p>Subtracting $2^{nd}$ equation from $1^{st}$ equation, we get</p> <p>$$-S = 1.2^1+1.2^2+1.2^3+...+1.2^n-n.2^{n+1}$$</p> <p>Now leaving the last term of our series aside, we observe a G.P. 
with common ratio $2$ and first term $2$, upon summing it we get,</p> <p>$$-S = 2(2^n-1)-n.2^{n+1}$$ $$-S = 2^{n+1}-n.2^{n+1}-2$$ $$S=(n-1)2^{n+1}+2$$</p> <p>$$S = 2(1+(n-1)2^n)$$ Now, we can see that the term $(n-1)2^n$ is even, so adding $1$ to it which is the case will make it odd. Now, for being power of $2$ all the factors of the numbers must be $2$ which is even. Hence, $S$ can't be power of $2$ except for $n = 1$ in which case $S = 2$.</p> <p>So, the only solution is $n = 1$.</p> Answer 4: <p>It is always possible. Let $T_n=\frac 12n(n+1)$ be the sum of the numbers from $1$ to $n$. For $n \gt 1$ we can always find $k$ such that $2^{k-1} \lt T_n \lt 2^k$. We just have to find a set of numbers between $1$ and $n$ that sum to $2^k-T_n$ and multiply them all by $2$, multiplying all the rest by $1$. We can do this by taking $n, n-1, n-2 \ldots$ until the sum gets too large, deleting the last one, and picking the smaller number we need to come out even. We can then achieve $2^m$ for $m \gt k$ by multiplying all the multipliers by $2^{m-k}$</p> Answer 5: <p>Assume $n$ is an "$m$-bit number", i.e., $2^{m-1}\le n&lt;2^{m}$. Let $S$ be the sum of all numbers from $1$ to $n$ that are neither powers of $2$ nor $3$ times powers of $2$. Then $$ S\le \frac{n(n+1)}{2}-2^{m}+1\le(2^m-1)(2^{m-1}-1)&lt;2^{2m-1},$$ So $S$ is at most a $2m-1$-bit number. By picking $2^m$ as coefficient for each non-power-of-2 summand, these therefore contribute $2^mS$. But we still have $m$ summands of the form $2^k$ and at least $m-1$ sumands of the form $3\cdot 2^k$. We will use these wisely:</p> <p>Suppose our sum so far ends in $r$ binary zeroes, i.e., it is $2^r s$ where $s$ is odd (and $r\ge m$). 
</p> <p>If $s\equiv 1\pmod 4$ and we still have a number $3\cdot2^k$ available, we can pick a suitable power of two as coefficient to arrive at $$ 2^rs+2^{r-k}\cdot 3\cdot 2^k=2^r(s+3)=2^{r+2}\cdot\frac{s+3}4,$$ a number with at least two more zeroes at the end and not more bits in total (unless we had $s=1$ and thus now have $2^{r+2}$, a power of two, as sum).</p> <p>On the other hand, if $s\equiv 3\pmod 4$ and we still have a number $2^k$ available, we can achieve $$2^rs+2^{r-k}\cdot 2^k=2^{r+2}\cdot\frac{s+1}4$$ as next sum, again a number with at least two more zeroes at the end and not more bits in total (unless we now have a power of two).</p> <p>Thus we can at least $m-1$ times (namely, as long as we can guarantee to still have a $2^k$ or $3\cdot 2^k$ available, whatever is needed) increase the number of trailing $0$ digits by two without increasing the bit length of the sum (unless we achieve a power of $2$ as sum). As $S$ has at most $2m-1$ bits, this means that we can zero $2(m-1)$ of these, i.e., we can arrive at a power of $2$, possibly without even using all our summands. The remaining summands, if any, can be added one by one without destroying the power-of-two-ness of the sum. So finally we arrive at a sum that is a power of $2$.</p> Answer 6: <p>Are you talking about the following series : $$1.2^1,2.2^2,...n.2^n$$</p> <p>It is an Arithmetico Geometric series. Summation of its terms can be found out as follows:</p> <p>Let $$S = 1.2^1+2.2^2+3.2^3...+n.2^n $$ Now multiplying both sides by the common ratio $2$, we get</p> <p>$$2.S = 1.2^2+2.2^3+3.2^4...+(n-1).2^n+n.2^{n+1}$$</p> <p>Subtracting $2^{nd}$ equation from $1^{st}$ equation, we get</p> <p>$$-S = 1.2^1+1.2^2+1.2^3+...+1.2^n-n.2^{n+1}$$</p> <p>Now leaving the last term of our series aside, we observe a G.P. 
with common ratio $2$ and first term $2$, upon summing it we get,</p> <p>$$-S = 2(2^n-1)-n.2^{n+1}$$ $$-S = 2^{n+1}-n.2^{n+1}-2$$ $$S=(n-1)2^{n+1}+2$$</p> <p>$$S = 2(1+(n-1)2^n)$$ Now, we can see that the term $(n-1)2^n$ is even, so adding $1$ to it, as we do here, will make it odd. Now, for a number to be a power of $2$, all of its prime factors must be $2$. Hence, $S$ can't be a power of $2$ except for $n = 1$, in which case $S = 2$.</p> <p>So, the only solution is $n = 1$.</p>
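The first answer's construction is effectively an algorithm: find $k$ with $2^{k-1} \le T_n < 2^k$, greedily pick a subset of $\{1,\dots,n\}$ summing to $2^k - T_n$, and double exactly those numbers. A Python sketch of that construction (an editorial addition; the helper names are not from the thread):

```python
# Greedy construction from the first answer: double a subset of {1..n}
# whose sum is 2^k - T_n, so the weighted total T_n + (2^k - T_n) = 2^k.
def power_of_two_coefficients(n):
    T = n * (n + 1) // 2
    k = T.bit_length()          # smallest k with 2^k > T (and 2^(k-1) <= T)
    deficit = (1 << k) - T      # how much must be added by doubling
    doubled = set()
    for i in range(n, 0, -1):   # take numbers from the top while they fit
        if i <= deficit:
            doubled.add(i)
            deficit -= i
    coeffs = {i: 2 if i in doubled else 1 for i in range(1, n + 1)}
    return coeffs, sum(c * i for i, c in coeffs.items())

for n in range(1, 60):
    _, total = power_of_two_coefficients(n)
    assert total & (total - 1) == 0  # the weighted sum is a power of two
print(power_of_two_coefficients(3)[1])  # T_3 = 6, doubling {2} gives 8
```

The greedy step always terminates with a zero deficit: before number $i$ is considered, the remaining deficit is at most $T_i$, the sum of the numbers still available.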
Can a symmetric rank-1 symmetric tensor be written as a linear combination of other symmetric rank-1 symmetric tensors?
2,090,753
4
https://math.stackexchange.com/questions/2090753/can-a-symmetric-rank-1-symmetric-tensor-be-written-as-a-linear-combination-of-ot
<p>I know that a symmetric tensor of symmetric rank $1$ can be viewed as a point on the so-called <em>Veronese variety</em>. A symmetric tensor of rank $r$ is then in the linear space spanned by $r$ points of the Veronese variety. My question is the following: can any given symmetric tensor of <strong>rank $1$</strong> ever reside in the linear space spanned by $r$ <em>other</em> points of the Veronese <em>variety</em>, <em>i.e.</em> be written as a linear combination of $r$ <em>other</em> symmetric rank-$1$ symmetric tensors? </p> <p>I am an engineer, currently working on tensor decompositions for signal processing applications. I'm not very familiar with algebraic geometry, but it seems that I need the answer to the question above, to ensure uniqueness of one such decomposition. I looked for (a hint toward) an answer in the literature on Veronese varieties, but it is rather hard to dig into.</p>
[ { "answer_id": 2493195, "score": 2, "text": "<p>Your question is: Can a given symmetric tensor of rank $1$ be written as a linear combination of other symmetric tensors of rank $1$? The answer is yes. Yes, a symmetric tensor of rank $1$ can be written as a linear combination of other symmetric tensors of rank $1$. This is true over the complex numbers, the real numbers, and other fields. For simplicity I'll write this answer over a field of characteristic zero; everything I say will be valid over real numbers and over complex numbers. (It can also be valid over other fields with mild conditions but I will ignore that for now.)</p>\n\n<p>Before proceeding with generalities, perhaps a concrete example might be helpful? Let $V$ be a two-dimensional vector space with basis $x,y$. (You are welcome to think of this as $V = \\mathbb{R}^2$ and $x = (1,0)$, $y=(0,1)$.) One of the symmetric tensors of rank $1$, and order $3$, in $V \\otimes V \\otimes V$, is $x \\otimes x \\otimes x$, which for short we might denote $x^3$. Can we write this $x^3$ as a linear combination of other rank $1$ symmetric tensors? 
Yes, in various ways; here is one.\n$$\n\\begin{split}\n (x+y) \\otimes (x+y) \\otimes (x+y) &amp;= x \\otimes x \\otimes x + x \\otimes x \\otimes y \\\\\n &amp;\\quad + x \\otimes y \\otimes x + y \\otimes x \\otimes x \\\\\n &amp;\\quad + x \\otimes y \\otimes y + y \\otimes x \\otimes y \\\\\n &amp;\\quad + y \\otimes y \\otimes x + y \\otimes y \\otimes y \\\\\n &amp;= x^3 + 3x^2y + 3xy^2 + y^3,\n\\end{split}\n$$\nwhere $x^3 = x \\otimes x \\otimes x$, $x^2y = \\frac{1}{3}(x \\otimes x \\otimes y + x \\otimes y \\otimes x + y \\otimes x \\otimes x)$, and in general $xyz = \\frac{1}{6}(x \\otimes y \\otimes z + \\text{all permutations})$.</p>\n\n<p>Well,\n$$\n (x+y)^3 + (x-y)^3 = 2x^3 + 6xy^2,\n$$\nand\n$$\n (x+2y)^3 + (x-2y)^3 = 2x^3 + 24xy^2 .\n$$\nSo therefore\n$$\n 4(x+y)^3 + 4(x-y)^3 - (x+2y)^3 - (x-2y)^3 = 6x^3.\n$$\nSo the rank $1$ symmetric tensor $x^3$ is in the span of the $4$ rank $1$ symmetric tensors $(x \\pm y)^3$ and $(x \\pm 2y)^3$.</p>\n\n<p>Now, this example begins to point the way to a more general answer. Because when we consider the space of symmetric tensors in $V \\otimes V \\otimes V$—it is a subspace that can be denoted $S^3 V$ ($S$ for Symmetric), or $S_3 V$, depending on the author—we see that it has dimension $4$. A basis is given by $x^3, x^2y, xy^2, y^3$. And another different basis is given by $(x\\pm y)^3$, $(x \\pm 2y)^3$. At that point it becomes clear that <em>every</em> symmetric tensor is a linear combination of those $4$ symmetric rank $1$ tensors, <em>including</em> all the symmetric rank $1$ tensors such as $x^3$.</p>\n\n<p>So we have the first answer to your question: Let $V$ be any finite dimensional vector space. Let $d \\geq 1$. Let $N$ be the dimension of $S^d V$. Now $S^d V$ is spanned by the symmetric rank $1$ tensors, so we can take some $N$ of them to form a basis. 
And now all the symmetric rank $1$ tensors can be written as linear combinations of those.</p>\n\n<p>Now there is a reasonable follow-up question, which might perhaps be a natural question from a standpoint of an engineer who is interested in identifiability issues: Okay, if some symmetric rank $1$ tensor is equal to a linear combination of some $r$ other symmetric rank $1$ tensors, then what is $r$?</p>\n\n<p>We can certainly have such linear combinations when $r=N$ (the dimension of $S^d V$) but unfortunately it can happen sooner than that. It happens as soon as $r=d+1$. Because if $V$ has a basis $x_1,\\dotsc,x_n$, then $x_1^d$ can be written as a linear combination of $(x_1 + t_i x_2)^d$ for any (!) $d+1$ nonzero values of $t_i$. (Nutshell: with Vandermonde matrices you can show these are linearly independent, and there are enough of them to span all the degree $d$ polynomials in $x_1,x_2$, including $x_1^d$.)</p>\n\n<p>I believe the converse may be true: if $L^d$ is a linear combination of some $r$ other symmetric rank $1$ tensors, then $r \\geq d+1$. But at the moment I'm not sure where to find a result along those lines. (I looked in some papers of Reznick and the book of Iarrobino-Kanev; it might be in there somewhere but so far I have not been able to find it.) If you are interested, please let me know and I will be happy to resume the search.</p>\n" }, { "answer_id": 2092166, "score": 0, "text": "<p>First let me make a point about the terminology. I believe what you mean by \"rank-$1$ tensor\" is what most people call a \"pure\" tensor: it is a tensor $t$ that can be written $v_1\\otimes\\cdots\\otimes v_n$ for some possibly different vectors $v_i$. I believe what you mean by \"pure symmetric\" is that all the $v_i$ are the same; you can prove that this is equivalent to being pure and symmetric separately, so the language \"symmetric pure symmetric\" is redundant.</p>\n\n<p>Let $t=v\\otimes\\cdots\\otimes v$ be a pure symmetric tensor. 
Choose a basis $E_i$ in which $v=E_1$, and suppose $t$ can be written as a linear combination of pure symmetric tensors. Then</p>\n\n<p>$$E_1\\otimes\\dots\\otimes E_1 = v\\otimes\\dots\\otimes v = \\sum_k w_k\\otimes\\cdots\\otimes w_k$$</p>\n\n<p>for some vectors $w_k=\\sum_i c_k^i E_i$. Thus</p>\n\n<p>$$E_1\\otimes\\dots\\otimes E_1=\\sum_{k,\\,i_1,\\dots,i_j}c_k^{i_1}\\cdots c_k^{i_j}E_{i_1}\\otimes\\cdots\\otimes E_{i_j}$$</p>\n\n<p>A basis representation of a vector is unique, so equating the coefficients on both sides, we see that the coefficient of each $E_{i_1}\\otimes\\cdots\\otimes E_{i_j}$ on the right is a polynomial in the $c_k^i$ that must be equal to $0$ or, in the case $i_1=\\cdots=i_j=1$, equal to $1$; the solution is obtained by checking the common solutions of these equations. In particular for $n\\ne 1$,</p>\n\n<p>$$\\sum_k c_k^n\\cdots c_k^n = 0$$</p>\n\n<p>If your vector space is real and the tensor order $j$ is even, then each term in this sum is an even power of a real number, thus nonnegative, and since the sum vanishes each term must be zero. In other words, $c_k^n=0$ for $n\\ne 1$. This means each $w_k$ is a scalar multiple of $E_1$, so the answer to your question is no.</p>\n\n<p>For real vector spaces with odd tensor order and for complex vector spaces it seems like the solution would be more complicated.</p>\n" }, { "answer_id": 2493195, "score": 2, "text": "<p>Your question is: Can a given symmetric tensor of rank $1$ be written as a linear combination of other symmetric tensors of rank $1$? The answer is yes. Yes, a symmetric tensor of rank $1$ can be written as a linear combination of other symmetric tensors of rank $1$. This is true over the complex numbers, the real numbers, and other fields. For simplicity I'll write this answer over a field of characteristic zero; everything I say will be valid over real numbers and over complex numbers. 
(It can also be valid over other fields with mild conditions but I will ignore that for now.)</p>\n\n<p>Before proceeding with generalities, perhaps a concrete example might be helpful? Let $V$ be a two-dimensional vector space with basis $x,y$. (You are welcome to think of this as $V = \\mathbb{R}^2$ and $x = (1,0)$, $y=(0,1)$.) One of the symmetric tensors of rank $1$, and order $3$, in $V \\otimes V \\otimes V$, is $x \\otimes x \\otimes x$, which for short we might denote $x^3$. Can we write this $x^3$ as a linear combination of other rank $1$ symmetric tensors? Yes, in various ways; here is one.\n$$\n\\begin{split}\n (x+y) \\otimes (x+y) \\otimes (x+y) &amp;= x \\otimes x \\otimes x + x \\otimes x \\otimes y \\\\\n &amp;\\quad + x \\otimes y \\otimes x + y \\otimes x \\otimes x \\\\\n &amp;\\quad + x \\otimes y \\otimes y + y \\otimes x \\otimes y \\\\\n &amp;\\quad + y \\otimes y \\otimes x + y \\otimes y \\otimes y \\\\\n &amp;= x^3 + 3x^2y + 3xy^2 + y^3,\n\\end{split}\n$$\nwhere $x^3 = x \\otimes x \\otimes x$, $x^2y = \\frac{1}{3}(x \\otimes x \\otimes y + x \\otimes y \\otimes x + y \\otimes x \\otimes x)$, and in general $xyz = \\frac{1}{6}(x \\otimes y \\otimes z + \\text{all permutations})$.</p>\n\n<p>Well,\n$$\n (x+y)^3 + (x-y)^3 = 2x^3 + 6xy^2,\n$$\nand\n$$\n (x+2y)^3 + (x-2y)^3 = 2x^3 + 24xy^2 .\n$$\nSo therefore\n$$\n 4(x+y)^3 + 4(x-y)^3 - (x+2y)^3 - (x-2y)^3 = 6x^3.\n$$\nSo the rank $1$ symmetric tensor $x^3$ is in the span of the $4$ rank $1$ symmetric tensors $(x \\pm y)^3$ and $(x \\pm 2y)^3$.</p>\n\n<p>Now, this example begins to point the way to a more general answer. Because when we consider the space of symmetric tensors in $V \\otimes V \\otimes V$—it is a subspace that can be denoted $S^3 V$ ($S$ for Symmetric), or $S_3 V$, depending on the author—we see that it has dimension $4$. A basis is given by $x^3, x^2y, xy^2, y^3$. And another different basis is given by $(x\\pm y)^3$, $(x \\pm 2y)^3$. 
At that point it becomes clear that <em>every</em> symmetric tensor is a linear combination of those $4$ symmetric rank $1$ tensors, <em>including</em> all the symmetric rank $1$ tensors such as $x^3$.</p>\n\n<p>So we have the first answer to your question: Let $V$ be any finite dimensional vector space. Let $d \\geq 1$. Let $N$ be the dimension of $S^d V$. Now $S^d V$ is spanned by the symmetric rank $1$ tensors, so we can take some $N$ of them to form a basis. And now all the symmetric rank $1$ tensors can be written as linear combinations of those.</p>\n\n<p>Now there is a reasonable follow-up question, which might perhaps be a natural question from a standpoint of an engineer who is interested in identifiability issues: Okay, if some symmetric rank $1$ tensor is equal to a linear combination of some $r$ other symmetric rank $1$ tensors, then what is $r$?</p>\n\n<p>We can certainly have such linear combinations when $r=N$ (the dimension of $S^d V$) but unfortunately it can happen sooner than that. It happens as soon as $r=d+1$. Because if $V$ has a basis $x_1,\\dotsc,x_n$, then $x_1^d$ can be written as a linear combination of $(x_1 + t_i x_2)^d$ for any (!) $d+1$ distinct nonzero values of $t_i$. (Nutshell: with Vandermonde matrices you can show these are linearly independent, and there are enough of them to span all the degree $d$ polynomials in $x_1,x_2$, including $x_1^d$.)</p>\n\n<p>I believe the converse may be true: if $L^d$ is a linear combination of some $r$ other symmetric rank $1$ tensors, then $r \\geq d+1$. But at the moment I'm not sure where to find a result along those lines. (I looked in some papers of Reznick and the book of Iarrobino-Kanev; it might be in there somewhere but so far I have not been able to find it.) If you are interested, please let me know and I will be happy to resume the search.</p>\n" }, { "answer_id": 2092166, "score": 0, "text": "<p>First let me make a point about the terminology. 
I believe what you mean by \"rank-$1$ tensor\" is what most people call a \"pure\" tensor: it is a tensor $t$ that can be written $v_1\\otimes\\cdots\\otimes v_n$ for some possibly different vectors $v_i$. I believe what you mean by \"pure symmetric\" is that all the $v_i$ are the same; you can prove that this is equivalent to being pure and symmetric separately, so the language \"symmetric pure symmetric\" is redundant.</p>\n\n<p>Let $t=v\\otimes\\cdots\\otimes v$ be a pure symmetric tensor. Choose a basis $E_i$ in which $v=E_1$, and suppose $t$ can be written as a linear combination of pure symmetric tensors. Then</p>\n\n<p>$$E_1\\otimes\\dots\\otimes E_1 = v\\otimes\\dots\\otimes v = \\sum_k w_k\\otimes\\cdots\\otimes w_k$$</p>\n\n<p>for some vectors $w_k=\\sum_i c_k^i E_i$. Thus</p>\n\n<p>$$E_1\\otimes\\dots\\otimes E_1=\\sum_{k,\\,i_1,\\dots,i_j}c_k^{i_1}\\cdots c_k^{i_j}E_{i_1}\\otimes\\cdots\\otimes E_{i_j}$$</p>\n\n<p>A basis representation of a vector is unique, so equating the coefficients on both sides, we see that the coefficient of each $E_{i_1}\\otimes\\cdots\\otimes E_{i_j}$ on the right is a polynomial in the $c_k^i$ that must be equal to $0$ or, in the case $i_1=\\cdots=i_j=1$, equal to $1$; the solution is obtained by checking the common solutions of these equations. In particular for $n\\ne 1$,</p>\n\n<p>$$\\sum_k c_k^n\\cdots c_k^n = 0$$</p>\n\n<p>If your vector space is real and the tensor order $j$ is even, then each term in this sum is an even power of a real number, thus nonnegative, and since the sum vanishes each term must be zero. In other words, $c_k^n=0$ for $n\\ne 1$. This means each $w_k$ is a scalar multiple of $E_1$, so the answer to your question is no.</p>\n\n<p>For real vector spaces with odd tensor order and for complex vector spaces it seems like the solution would be more complicated.</p>\n" } ]
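The Vandermonde argument in the answers above can be checked concretely. The following Python sketch (my own illustration; the helper `solve` and the choice of nodes $t_i\in\{1,-1,2,-2\}$ are assumptions, not taken from the answers) solves exactly, in rational arithmetic, for coefficients expressing $x^d$ in the span of $(x+t_i y)^d$ for $d=3$ and four distinct nonzero $t_i$:

```python
from fractions import Fraction
from math import comb

d = 3
ts = [Fraction(1), Fraction(-1), Fraction(2), Fraction(-2)]  # distinct nonzero t_i

# Column j holds the coefficients of (x + t_j y)^d in the basis x^d, x^{d-1}y, ..., y^d,
# i.e. the binomial expansion C(d,k) t_j^k.
A = [[comb(d, k) * t ** k for t in ts] for k in range(d + 1)]
b = [Fraction(1)] + [Fraction(0)] * d  # target vector: the tensor x^d

def solve(A, b):
    """Gaussian elimination over exact rationals (small dense system)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

c = solve(A, b)
print(c)  # c == [2/3, 2/3, -1/6, -1/6]
```

The invertibility of `A` is exactly the Vandermonde fact quoted in the answer: distinct nodes give linearly independent columns.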
Question: Can a symmetric rank-1 symmetric tensor be written as a linear combination of other symmetric rank-1 symmetric tensors? Answer 1: <p>Your question is: Can a given symmetric tensor of rank $1$ be written as a linear combination of other symmetric tensors of rank $1$? The answer is yes. Yes, a symmetric tensor of rank $1$ can be written as a linear combination of other symmetric tensors of rank $1$. This is true over the complex numbers, the real numbers, and other fields. For simplicity I'll write this answer over a field of characteristic zero; everything I say will be valid over real numbers and over complex numbers. (It can also be valid over other fields with mild conditions but I will ignore that for now.)</p> <p>Before proceeding with generalities, perhaps a concrete example might be helpful? Let $V$ be a two-dimensional vector space with basis $x,y$. (You are welcome to think of this as $V = \mathbb{R}^2$ and $x = (1,0)$, $y=(0,1)$.) One of the symmetric tensors of rank $1$, and order $3$, in $V \otimes V \otimes V$, is $x \otimes x \otimes x$, which for short we might denote $x^3$. Can we write this $x^3$ as a linear combination of other rank $1$ symmetric tensors? Yes, in various ways; here is one. $$ \begin{split} (x+y) \otimes (x+y) \otimes (x+y) &amp;= x \otimes x \otimes x + x \otimes x \otimes y \\ &amp;\quad + x \otimes y \otimes x + y \otimes x \otimes x \\ &amp;\quad + x \otimes y \otimes y + y \otimes x \otimes y \\ &amp;\quad + y \otimes y \otimes x + y \otimes y \otimes y \\ &amp;= x^3 + 3x^2y + 3xy^2 + y^3, \end{split} $$ where $x^3 = x \otimes x \otimes x$, $x^2y = \frac{1}{3}(x \otimes x \otimes y + x \otimes y \otimes x + y \otimes x \otimes x)$, and in general $xyz = \frac{1}{6}(x \otimes y \otimes z + \text{all permutations})$.</p> <p>Well, $$ (x+y)^3 + (x-y)^3 = 2x^3 + 6xy^2, $$ and $$ (x+2y)^3 + (x-2y)^3 = 2x^3 + 24xy^2 . $$ So therefore $$ 4(x+y)^3 + 4(x-y)^3 - (x+2y)^3 - (x-2y)^3 = 6x^3. 
$$ So the rank $1$ symmetric tensor $x^3$ is in the span of the $4$ rank $1$ symmetric tensors $(x \pm y)^3$ and $(x \pm 2y)^3$.</p> <p>Now, this example begins to point the way to a more general answer. Because when we consider the space of symmetric tensors in $V \otimes V \otimes V$—it is a subspace that can be denoted $S^3 V$ ($S$ for Symmetric), or $S_3 V$, depending on the author—we see that it has dimension $4$. A basis is given by $x^3, x^2y, xy^2, y^3$. And another different basis is given by $(x\pm y)^3$, $(x \pm 2y)^3$. At that point it becomes clear that <em>every</em> symmetric tensor is a linear combination of those $4$ symmetric rank $1$ tensors, <em>including</em> all the symmetric rank $1$ tensors such as $x^3$.</p> <p>So we have the first answer to your question: Let $V$ be any finite dimensional vector space. Let $d \geq 1$. Let $N$ be the dimension of $S^d V$. Now $S^d V$ is spanned by the symmetric rank $1$ tensors, so we can take some $N$ of them to form a basis. And now all the symmetric rank $1$ tensors can be written as linear combinations of those.</p> <p>Now there is a reasonable follow-up question, which might perhaps be a natural question from a standpoint of an engineer who is interested in identifiability issues: Okay, if some symmetric rank $1$ tensor is equal to a linear combination of some $r$ other symmetric rank $1$ tensors, then what is $r$?</p> <p>We can certainly have such linear combinations when $r=N$ (the dimension of $S^d V$) but unfortunately it can happen sooner than that. It happens as soon as $r=d+1$. Because if $V$ has a basis $x_1,\dotsc,x_n$, then $x_1^d$ can be written as a linear combination of $(x_1 + t_i x_2)^d$ for any (!) $d+1$ distinct nonzero values of $t_i$. 
(Nutshell: with Vandermonde matrices you can show these are linearly independent, and there are enough of them to span all the degree $d$ polynomials in $x_1,x_2$, including $x_1^d$.)</p> <p>I believe the converse may be true: if $L^d$ is a linear combination of some $r$ other symmetric rank $1$ tensors, then $r \geq d+1$. But at the moment I'm not sure where to find a result along those lines. (I looked in some papers of Reznick and the book of Iarrobino-Kanev; it might be in there somewhere but so far I have not been able to find it.) If you are interested, please let me know and I will be happy to resume the search.</p> Answer 2: <p>First let me make a point about the terminology. I believe what you mean by "rank-$1$ tensor" is what most people call a "pure" tensor: it is a tensor $t$ that can be written $v_1\otimes\cdots\otimes v_n$ for some possibly different vectors $v_i$. I believe what you mean by "pure symmetric" is that all the $v_i$ are the same; you can prove that this is equivalent to being pure and symmetric separately, so the language "symmetric pure symmetric" is redundant.</p> <p>Let $t=v\otimes\cdots\otimes v$ be a pure symmetric tensor. Choose a basis $E_i$ in which $v=E_1$, and suppose $t$ can be written as a linear combination of pure symmetric tensors. Then</p> <p>$$E_1\otimes\dots\otimes E_1 = v\otimes\dots\otimes v = \sum_k w_k\otimes\cdots\otimes w_k$$</p> <p>for some vectors $w_k=\sum_i c_k^i E_i$. Thus</p> <p>$$E_1\otimes\dots\otimes E_1=\sum_{k,\,i_1,\dots,i_j}c_k^{i_1}\cdots c_k^{i_j}E_{i_1}\otimes\cdots\otimes E_{i_j}$$</p> <p>A basis representation of a vector is unique, so equating the coefficients on both sides, we see that the coefficient of each $E_{i_1}\otimes\cdots\otimes E_{i_j}$ on the right is a polynomial in the $c_k^i$ that must be equal to $0$ or, in the case $i_1=\cdots=i_j=1$, equal to $1$; the solution is obtained by checking the common solutions of these equations. 
In particular for $n\ne 1$,</p> <p>$$\sum_k c_k^n\cdots c_k^n = 0$$</p> <p>If your vector space is real and the tensor order $j$ is even, then each term in this sum is an even power of a real number, thus nonnegative, and since the sum vanishes each term must be zero. In other words, $c_k^n=0$ for $n\ne 1$. This means each $w_k$ is a scalar multiple of $E_1$, so the answer to your question is no.</p> <p>For real vector spaces with odd tensor order and for complex vector spaces it seems like the solution would be more complicated.</p> Answer 3: <p>Your question is: Can a given symmetric tensor of rank $1$ be written as a linear combination of other symmetric tensors of rank $1$? The answer is yes. Yes, a symmetric tensor of rank $1$ can be written as a linear combination of other symmetric tensors of rank $1$. This is true over the complex numbers, the real numbers, and other fields. For simplicity I'll write this answer over a field of characteristic zero; everything I say will be valid over real numbers and over complex numbers. (It can also be valid over other fields with mild conditions but I will ignore that for now.)</p> <p>Before proceeding with generalities, perhaps a concrete example might be helpful? Let $V$ be a two-dimensional vector space with basis $x,y$. (You are welcome to think of this as $V = \mathbb{R}^2$ and $x = (1,0)$, $y=(0,1)$.) One of the symmetric tensors of rank $1$, and order $3$, in $V \otimes V \otimes V$, is $x \otimes x \otimes x$, which for short we might denote $x^3$. Can we write this $x^3$ as a linear combination of other rank $1$ symmetric tensors? Yes, in various ways; here is one. 
$$ \begin{split} (x+y) \otimes (x+y) \otimes (x+y) &amp;= x \otimes x \otimes x + x \otimes x \otimes y \\ &amp;\quad + x \otimes y \otimes x + y \otimes x \otimes x \\ &amp;\quad + x \otimes y \otimes y + y \otimes x \otimes y \\ &amp;\quad + y \otimes y \otimes x + y \otimes y \otimes y \\ &amp;= x^3 + 3x^2y + 3xy^2 + y^3, \end{split} $$ where $x^3 = x \otimes x \otimes x$, $x^2y = \frac{1}{3}(x \otimes x \otimes y + x \otimes y \otimes x + y \otimes x \otimes x)$, and in general $xyz = \frac{1}{6}(x \otimes y \otimes z + \text{all permutations})$.</p> <p>Well, $$ (x+y)^3 + (x-y)^3 = 2x^3 + 6xy^2, $$ and $$ (x+2y)^3 + (x-2y)^3 = 2x^3 + 24xy^2 . $$ So therefore $$ 4(x+y)^3 + 4(x-y)^3 - (x+2y)^3 - (x-2y)^3 = 6x^3. $$ So the rank $1$ symmetric tensor $x^3$ is in the span of the $4$ rank $1$ symmetric tensors $(x \pm y)^3$ and $(x \pm 2y)^3$.</p> <p>Now, this example begins to point the way to a more general answer. Because when we consider the space of symmetric tensors in $V \otimes V \otimes V$—it is a subspace that can be denoted $S^3 V$ ($S$ for Symmetric), or $S_3 V$, depending on the author—we see that it has dimension $4$. A basis is given by $x^3, x^2y, xy^2, y^3$. And another different basis is given by $(x\pm y)^3$, $(x \pm 2y)^3$. At that point it becomes clear that <em>every</em> symmetric tensor is a linear combination of those $4$ symmetric rank $1$ tensors, <em>including</em> all the symmetric rank $1$ tensors such as $x^3$.</p> <p>So we have the first answer to your question: Let $V$ be any finite dimensional vector space. Let $d \geq 1$. Let $N$ be the dimension of $S^d V$. Now $S^d V$ is spanned by the symmetric rank $1$ tensors, so we can take some $N$ of them to form a basis. 
And now all the symmetric rank $1$ tensors can be written as linear combinations of those.</p> <p>Now there is a reasonable follow-up question, which might perhaps be a natural question from a standpoint of an engineer who is interested in identifiability issues: Okay, if some symmetric rank $1$ tensor is equal to a linear combination of some $r$ other symmetric rank $1$ tensors, then what is $r$?</p> <p>We can certainly have such linear combinations when $r=N$ (the dimension of $S^d V$) but unfortunately it can happen sooner than that. It happens as soon as $r=d+1$. Because if $V$ has a basis $x_1,\dotsc,x_n$, then $x_1^d$ can be written as a linear combination of $(x_1 + t_i x_2)^d$ for any (!) $d+1$ distinct nonzero values of $t_i$. (Nutshell: with Vandermonde matrices you can show these are linearly independent, and there are enough of them to span all the degree $d$ polynomials in $x_1,x_2$, including $x_1^d$.)</p> <p>I believe the converse may be true: if $L^d$ is a linear combination of some $r$ other symmetric rank $1$ tensors, then $r \geq d+1$. But at the moment I'm not sure where to find a result along those lines. (I looked in some papers of Reznick and the book of Iarrobino-Kanev; it might be in there somewhere but so far I have not been able to find it.) If you are interested, please let me know and I will be happy to resume the search.</p> Answer 4: <p>First let me make a point about the terminology. I believe what you mean by "rank-$1$ tensor" is what most people call a "pure" tensor: it is a tensor $t$ that can be written $v_1\otimes\cdots\otimes v_n$ for some possibly different vectors $v_i$. I believe what you mean by "pure symmetric" is that all the $v_i$ are the same; you can prove that this is equivalent to being pure and symmetric separately, so the language "symmetric pure symmetric" is redundant.</p> <p>Let $t=v\otimes\cdots\otimes v$ be a pure symmetric tensor. 
Choose a basis $E_i$ in which $v=E_1$, and suppose $t$ can be written as a linear combination of pure symmetric tensors. Then</p> <p>$$E_1\otimes\dots\otimes E_1 = v\otimes\dots\otimes v = \sum_k w_k\otimes\cdots\otimes w_k$$</p> <p>for some vectors $w_k=\sum_i c_k^i E_i$. Thus</p> <p>$$E_1\otimes\dots\otimes E_1=\sum_{k,\,i_1,\dots,i_j}c_k^{i_1}\cdots c_k^{i_j}E_{i_1}\otimes\cdots\otimes E_{i_j}$$</p> <p>A basis representation of a vector is unique, so equating the coefficients on both sides, we see that the coefficient of each $E_{i_1}\otimes\cdots\otimes E_{i_j}$ on the right is a polynomial in the $c_k^i$ that must be equal to $0$ or, in the case $i_1=\cdots=i_j=1$, equal to $1$; the solution is obtained by checking the common solutions of these equations. In particular for $n\ne 1$,</p> <p>$$\sum_k c_k^n\cdots c_k^n = 0$$</p> <p>If your vector space is real and the tensor order $j$ is even, then each term in this sum is an even power of a real number, thus nonnegative, and since the sum vanishes each term must be zero. In other words, $c_k^n=0$ for $n\ne 1$. This means each $w_k$ is a scalar multiple of $E_1$, so the answer to your question is no.</p> <p>For real vector spaces with odd tensor order and for complex vector spaces it seems like the solution would be more complicated.</p>
Calculating the operator adjoint to the partial time derivative
2,090,800
4
https://math.stackexchange.com/questions/2090800/calculating-the-operator-adjoint-to-the-partial-time-derivative
<p>I want to show that for any $\mathcal{L}^2(\mathbb{C}^n\times\mathbb{R},\mathbb{C})$-function $\Psi(x,t)$ the differential operator $\partial_t$ is self-adjoint, i.e. $\langle\Phi|\partial_t\Psi\rangle=\langle\partial_t\Phi|\Psi\rangle$ for any $\Phi,\Psi\in\mathcal{L}^2(\mathbb{C}^n\times\mathbb{R},\mathbb{C})$, and therefore $\partial^\dagger_t=\partial_t$.</p> <p>By definition the Hermitian scalar product of $\Psi,\Phi$ is $$\langle\Phi(t)|\partial_t\Psi(t)\rangle=\int_{\mathbb{C}^n}\Phi^\ast(x,t)\frac{\partial\Psi}{\partial t}(x,t)d^nx.$$</p> <p>Because of $\frac{\partial\Phi^\ast\Psi}{\partial t}(x,t)=\Phi^\ast(x,t)\frac{\partial\Psi}{\partial t}(x,t)+\frac{\partial\Phi^\ast}{\partial t}(x,t)\Psi(x,t)$, we can rewrite this as</p> <p>$$\int_{\mathbb{C}^n}\Phi^\ast(x,t)\frac{\partial\Psi}{\partial t}(x,t)d^nx = \int_{\mathbb{C}^n}\frac{\partial\Phi^\ast\Psi}{\partial t}(x,t)d^nx-\int_{\mathbb{C}^n}\frac{\partial\Phi^\ast}{\partial t}(x,t)\Psi(x,t)d^nx.$$</p> <p>Since $\Phi(x,t),\Psi(x,t)$ are $\mathcal{L}^2$-integrable over $\mathbb{C}^n$ for each constant $t\in\mathbb{R}$ by default we can exchange the integral and the differential for the first summand, which gives us:</p> <p>$$\int_{\mathbb{C}^n}\frac{\partial\Phi^\ast\Psi}{\partial t}(x,t)d^nx = \frac{\partial}{\partial t}\int_{\mathbb{C}^n}\Phi^\ast\Psi(x,t)d^nx = \frac{\partial}{\partial t}\langle\Phi(t)|\Psi(t)\rangle.$$</p> <p>This is equal to zero, which we can show by using the unitary time evolution operator $U(t,t_0)$. Let $|\Psi(t)\rangle = U(t,t_0)|\Psi(t_0)\rangle$. Then $\langle\Phi(t)| = \left(U(t,t_0)|\Phi(t_0)\rangle\right)^\dagger=\langle\Phi(t_0)|U^\dagger(t,t_0)$. 
Therefore, the scalar product of both vectors is time-independent:</p> <p>$$ \frac{\partial}{\partial t}\langle\Phi(t)|\Psi(t)\rangle = \frac{\partial}{\partial t}\langle\Phi(t_0)|U^\dagger(t,t_0)U(t,t_0)|\Psi(t_0)\rangle = \frac{\partial}{\partial t}\langle\Phi(t_0)|\Psi(t_0)\rangle = 0.$$</p> <p>Hence, we now have the following equation: $$\int_{\mathbb{C}^n}\Phi^\ast(x,t)\frac{\partial\Psi}{\partial t}(x,t)d^nx = -\int_{\mathbb{C}^n}\frac{\partial\Phi^\ast}{\partial t}(x,t)\Psi(x,t)d^nx.$$</p> <p>Doesn't this rather mean that $\partial^\dagger_t=-\partial_t$? This is where I am stuck.</p>
[ { "answer_id": 2091554, "score": 4, "text": "<p>What you wrote is wrong, sorry. If the Hilbert space is $L^2(\\mathbb C^n \\times \\mathbb R, \\mathbb C)$, then the relevant integral is $$\\int_{\\mathbb C^n \\times \\mathbb R} \\overline{\\psi(x,t)} \\phi(x,t) d^nx dt\\tag{1}$$\nwhere (I am interpreting your notations) the functions are complex-valued and $d^nx$ is the Lebesgue measure over $\\mathbb C^n$ viewed as $\\mathbb R^{2n}$ and $dt$ the one over $\\mathbb R$. Presumably your $\\mathbb C^n$ should be $\\mathbb R^n$. </p>\n\n<p>Instead you use the measure $d^nx$ over the first factor only. \n$$\\int_{\\mathbb C^n} \\overline{\\psi(x)} \\phi(x) d^nx$$\nIn this case <strong>it does not make any sense</strong> to wonder whether or not $\\partial_t$ is Hermitian, since it is <strong>not</strong> an operator over the linear space $L^2(\\mathbb C^n, \\mathbb C)$, that is, on the functions $\\psi=\\psi(x)$. </p>\n\n<p>(Even dealing with the whole Hilbert space $L^2(\\mathbb C^n \\times \\mathbb R, \\mathbb C)$ and the scalar product (1) there would be other issues regarding regularity of functions but the problem arising from your approach is much more relevant.) </p>\n\n<p>In elementary Quantum Mechanics the Hilbert space is $L^2(\\mathbb R^n)$ with respect to the Lebesgue measure $d^nx$ over $\\mathbb R^n$. </p>\n\n<p><strong>Operators</strong> are linear maps $A: D(A) \\to L^2(\\mathbb R^n)$ where $D(A) \\subset L^2(\\mathbb R^n)$ is a (usually dense) linear subspace. 
</p>\n\n<p>You are instead considering a function from $\\mathbb R$ to $L^2(\\mathbb R^n)$ physically representing the <strong>temporal evolution</strong> of a vector $\\psi_0$ at $t=0$\n$$\\mathbb R \\ni t \\mapsto \\psi_t \\in L^2(\\mathbb R^n) $$ \nThe derivative you are considering acts on such a family\n$$\\partial_t \\psi_t := \\lim_{h \\to 0} \\frac{\\psi_{t+h} -\\psi_t}{h} \\tag{2}$$\nand, if it exists, is computed with respect to the topology of $L^2(\\mathbb R^n)$.\nYou see that to compute (2) you need a $t$-parametrized <strong>family</strong> of vectors.</p>\n\n<p>With a <strong>single</strong> vector $\\psi \\in L^2(\\mathbb R^n)$, $\\partial_t \\psi$ would not have any meaning, unlike, say, $A\\psi$ -- where $A$ is a true linear operator in $L^2(\\mathbb R^n)$ and $\\psi \\in D(A)$ -- which is meaningful. </p>\n" }, { "answer_id": 2090827, "score": 2, "text": "<p>So, it seems that I had a \"Brett vorm Kopf\" as they say in German.</p>\n\n<p>Since $\\partial_t\\Phi^\\ast(x,t)$ is a real derivative, complex conjugation and derivation commute. Thus, indeed we have $-\\int_{\\mathbb{C}^n}\\frac{\\partial\\Phi^\\ast}{\\partial t}(x,t)\\Psi(x,t)d^nx = \\int_{\\mathbb{C}^n}\\left(-\\frac{\\partial\\Phi}{\\partial t}(x,t)\\right)^\\ast\\Psi(x,t)d^nx$, which concludes</p>\n\n<p>$$\\langle\\Phi(t)|\\partial_t\\Psi(t)\\rangle = \\langle-\\partial_t\\Phi(t)|\\Psi(t)\\rangle \\Leftrightarrow \\partial^\\dagger_t=-\\partial_t.$$</p>\n\n<p>Hence, my claim in the very beginning of my post was wrong. This is the answer I was looking for. The post can be closed.</p>\n" }, { "answer_id": 2091554, "score": 4, "text": "<p>What you wrote is wrong, sorry. 
If the Hilbert space is $L^2(\\mathbb C^n \\times \\mathbb R, \\mathbb C)$, then the relevant integral is $$\\int_{\\mathbb C^n \\times \\mathbb R} \\overline{\\psi(x,t)} \\phi(x,t) d^nx dt\\tag{1}$$\nwhere (I am interpreting your notations) the functions are complex-valued and $d^nx$ is the Lebesgue measure over $\\mathbb C^n$ viewed as $\\mathbb R^{2n}$ and $dt$ the one over $\\mathbb R$. Presumably your $\\mathbb C^n$ should be $\\mathbb R^n$. </p>\n\n<p>Instead you use the measure $d^nx$ over the first factor only. \n$$\\int_{\\mathbb C^n} \\overline{\\psi(x)} \\phi(x) d^nx$$\nIn this case <strong>it does not make any sense</strong> to wonder whether or not $\\partial_t$ is Hermitian, since it is <strong>not</strong> an operator over the linear space $L^2(\\mathbb C^n, \\mathbb C)$, that is, on the functions $\\psi=\\psi(x)$. </p>\n\n<p>(Even dealing with the whole Hilbert space $L^2(\\mathbb C^n \\times \\mathbb R, \\mathbb C)$ and the scalar product (1) there would be other issues regarding regularity of functions but the problem arising from your approach is much more relevant.) </p>\n\n<p>In elementary Quantum Mechanics the Hilbert space is $L^2(\\mathbb R^n)$ with respect to the Lebesgue measure $d^nx$ over $\\mathbb R^n$. </p>\n\n<p><strong>Operators</strong> are linear maps $A: D(A) \\to L^2(\\mathbb R^n)$ where $D(A) \\subset L^2(\\mathbb R^n)$ is a (usually dense) linear subspace. 
</p>\n\n<p>You are instead considering a function from $\\mathbb R$ to $L^2(\\mathbb R^n)$ physically representing the <strong>temporal evolution</strong> of a vector $\\psi_0$ at $t=0$\n$$\\mathbb R \\ni t \\mapsto \\psi_t \\in L^2(\\mathbb R^n) $$ \nThe derivative you are considering acts on such a family\n$$\\partial_t \\psi_t := \\lim_{h \\to 0} \\frac{\\psi_{t+h} -\\psi_t}{h} \\tag{2}$$\nand, if it exists, is computed with respect to the topology of $L^2(\\mathbb R^n)$.\nYou see that to compute (2) you need a $t$-parametrized <strong>family</strong> of vectors.</p>\n\n<p>With a <strong>single</strong> vector $\\psi \\in L^2(\\mathbb R^n)$, $\\partial_t \\psi$ would not have any meaning, unlike, say, $A\\psi$ -- where $A$ is a true linear operator in $L^2(\\mathbb R^n)$ and $\\psi \\in D(A)$ -- which is meaningful. </p>\n" }, { "answer_id": 2090827, "score": 2, "text": "<p>So, it seems that I had a \"Brett vorm Kopf\" as they say in German.</p>\n\n<p>Since $\\partial_t\\Phi^\\ast(x,t)$ is a real derivative, complex conjugation and derivation commute. Thus, indeed we have $-\\int_{\\mathbb{C}^n}\\frac{\\partial\\Phi^\\ast}{\\partial t}(x,t)\\Psi(x,t)d^nx = \\int_{\\mathbb{C}^n}\\left(-\\frac{\\partial\\Phi}{\\partial t}(x,t)\\right)^\\ast\\Psi(x,t)d^nx$, which concludes</p>\n\n<p>$$\\langle\\Phi(t)|\\partial_t\\Psi(t)\\rangle = \\langle-\\partial_t\\Phi(t)|\\Psi(t)\\rangle \\Leftrightarrow \\partial^\\dagger_t=-\\partial_t.$$</p>\n\n<p>Hence, my claim in the very beginning of my post was wrong. This is the answer I was looking for. The post can be closed.</p>\n" } ]
Question: Calculating the operator adjoint to the partial time derivative Answer 1: <p>What you wrote is wrong, sorry. If the Hilbert space is $L^2(\mathbb C^n \times \mathbb R, \mathbb C)$, then the relevant integral is $$\int_{\mathbb C^n \times \mathbb R} \overline{\psi(x,t)} \phi(x,t) d^nx dt\tag{1}$$ where (I am interpreting your notations) the functions are complex-valued and $d^nx$ is the Lebesgue measure over $\mathbb C^n$ viewed as $\mathbb R^{2n}$ and $dt$ the one over $\mathbb R$. Presumably your $\mathbb C^n$ should be $\mathbb R^n$. </p> <p>Instead you use the measure $d^nx$ over the first factor only. $$\int_{\mathbb C^n} \overline{\psi(x)} \phi(x) d^nx$$ In this case <strong>it does not make any sense</strong> to wonder whether or not $\partial_t$ is Hermitian, since it is <strong>not</strong> an operator over the linear space $L^2(\mathbb C^n, \mathbb C)$, that is, on the functions $\psi=\psi(x)$. </p> <p>(Even dealing with the whole Hilbert space $L^2(\mathbb C^n \times \mathbb R, \mathbb C)$ and the scalar product (1) there would be other issues regarding regularity of functions but the problem arising from your approach is much more relevant.) </p> <p>In elementary Quantum Mechanics the Hilbert space is $L^2(\mathbb R^n)$ with respect to the Lebesgue measure $d^nx$ over $\mathbb R^n$. </p> <p><strong>Operators</strong> are linear maps $A: D(A) \to L^2(\mathbb R^n)$ where $D(A) \subset L^2(\mathbb R^n)$ is a (usually dense) linear subspace. </p> <p>You are instead considering a function from $\mathbb R$ to $L^2(\mathbb R^n)$ physically representing the <strong>temporal evolution</strong> of a vector $\psi_0$ at $t=0$ $$\mathbb R \ni t \mapsto \psi_t \in L^2(\mathbb R^n) $$ The derivative you are considering acts on such a family $$\partial_t \psi_t := \lim_{h \to 0} \frac{\psi_{t+h} -\psi_t}{h} \tag{2}$$ and, if it exists, is computed with respect to the topology of $L^2(\mathbb R^n)$.
You see that to compute (2) you need a $t$-parametrized <strong>family</strong> of vectors.</p> <p>With a <strong>single</strong> vector $\psi \in L^2(\mathbb R^n)$, $\partial_t \psi$ would not have any meaning, unlike, say, $A\psi$ -- where $A$ is a true linear operator in $L^2(\mathbb R^n)$ and $\psi \in D(A)$ -- which is meaningful. </p> Answer 2: <p>So, it seems that I had a "Brett vorm Kopf" as they say in German.</p> <p>Since $\partial_t\Phi^\ast(x,t)$ is a real derivative, complex conjugation and derivation commute. Thus, indeed we have $-\int_{\mathbb{C}^n}\frac{\partial\Phi^\ast}{\partial t}(x,t)\Psi(x,t)d^nx = \int_{\mathbb{C}^n}\left(-\frac{\partial\Phi}{\partial t}(x,t)\right)^\ast\Psi(x,t)d^nx$, which concludes</p> <p>$$\langle\Phi(t)|\partial_t\Psi(t)\rangle = \langle-\partial_t\Phi(t)|\Psi(t)\rangle \Leftrightarrow \partial^\dagger_t=-\partial_t.$$</p> <p>Hence, my claim in the very beginning of my post was wrong. This is the answer I was looking for. The post can be closed.</p> Answer 3: <p>What you wrote is wrong, sorry. If the Hilbert space is $L^2(\mathbb C^n \times \mathbb R, \mathbb C)$, then the relevant integral is $$\int_{\mathbb C^n \times \mathbb R} \overline{\psi(x,t)} \phi(x,t) d^nx dt\tag{1}$$ where (I am interpreting your notations) the functions are complex-valued and $d^nx$ is the Lebesgue measure over $\mathbb C^n$ viewed as $\mathbb R^{2n}$ and $dt$ the one over $\mathbb R$. Presumably your $\mathbb C^n$ should be $\mathbb R^n$. </p> <p>Instead you use the measure $d^nx$ over the first factor only. 
$$\int_{\mathbb C^n} \overline{\psi(x)} \phi(x) d^nx$$ In this case <strong>it does not make any sense</strong> to wonder whether or not $\partial_t$ is Hermitian, since it is <strong>not</strong> an operator over the linear space $L^2(\mathbb C^n, \mathbb C)$, that is, on the functions $\psi=\psi(x)$. </p> <p>(Even dealing with the whole Hilbert space $L^2(\mathbb C^n \times \mathbb R, \mathbb C)$ and the scalar product (1) there would be other issues regarding regularity of functions but the problem arising from your approach is much more relevant.) </p> <p>In elementary Quantum Mechanics the Hilbert space is $L^2(\mathbb R^n)$ with respect to the Lebesgue measure $d^nx$ over $\mathbb R^n$. </p> <p><strong>Operators</strong> are linear maps $A: D(A) \to L^2(\mathbb R^n)$ where $D(A) \subset L^2(\mathbb R^n)$ is a (usually dense) linear subspace. </p> <p>You are instead considering a function from $\mathbb R$ to $L^2(\mathbb R^n)$ physically representing the <strong>temporal evolution</strong> of a vector $\psi_0$ at $t=0$ $$\mathbb R \ni t \mapsto \psi_t \in L^2(\mathbb R^n) $$ The derivative you are considering acts on such a family $$\partial_t \psi_t := \lim_{h \to 0} \frac{\psi_{t+h} -\psi_t}{h} \tag{2}$$ and, if it exists, is computed with respect to the topology of $L^2(\mathbb R^n)$. You see that to compute (2) you need a $t$-parametrized <strong>family</strong> of vectors.</p> <p>With a <strong>single</strong> vector $\psi \in L^2(\mathbb R^n)$, $\partial_t \psi$ would not have any meaning, unlike, say, $A\psi$ -- where $A$ is a true linear operator in $L^2(\mathbb R^n)$ and $\psi \in D(A)$ -- which is meaningful. </p> Answer 4: <p>So, it seems that I had a "Brett vorm Kopf" as they say in German.</p> <p>Since $\partial_t\Phi^\ast(x,t)$ is a real derivative, complex conjugation and derivation commute. 
Thus, indeed we have $-\int_{\mathbb{C}^n}\frac{\partial\Phi^\ast}{\partial t}(x,t)\Psi(x,t)d^nx = \int_{\mathbb{C}^n}\left(-\frac{\partial\Phi}{\partial t}(x,t)\right)^\ast\Psi(x,t)d^nx$, which concludes</p> <p>$$\langle\Phi(t)|\partial_t\Psi(t)\rangle = \langle-\partial_t\Phi(t)|\Psi(t)\rangle \Leftrightarrow \partial^\dagger_t=-\partial_t.$$</p> <p>Hence, my claim in the very beginning of my post was wrong. This is the answer I was looking for. The post can be closed.</p>
Solving a differential equation with a factor $2$ in the argument of the unknown function.
2,090,781
4
https://math.stackexchange.com/questions/2090781/solving-a-differential-equation-with-a-factor-2-in-the-argument-of-the-unknown
<p>The well-known trig identity </p> <p>$$2\sin(x)\cos(x) = \sin(2x)$$ </p> <p>gives us the differential equation (which I'm not even sure qualifies as an ODE, given the factor in the argument?)</p> <p>$$2f(x)f'(x) = f(2x)$$</p> <p>Assuming one had no idea that $\sin(x)$ is a solution to this equation, I was wondering if there was a way to derive the solution anyway.</p> <p><strong>My attempt</strong></p> <p>Assume that $f(x)$ has a power series expansion valid for all values of $x$ that we are interested in, that is</p> <p>$$f(x) = \sum_{n=0}^{\infty}a_nx^n$$</p> <p>for some coefficients $a_n$.</p> <p>Plugging this in gives us</p> <p>$$\sum_{n=0}^\infty 2^na_nx^n=2\left(\sum_{n=0}^\infty a_nx^n\right)\left(\sum_{n=0}^\infty (n+1)a_{n+1}x^n\right) \tag1$$</p> <p>Assuming absolute convergence, we can rewrite the RHS using the Cauchy product formula:</p> <p>\begin{align} \left(\sum_{n=0}^\infty a_nx^n\right)\left(\sum_{n=0}^\infty (n+1)a_{n+1}x^n\right)&amp;=\sum_{n=0}^\infty\sum_{k=0}^n(k+1)a_{k+1}x^ka_{n-k}x^{n-k}\\ &amp;=\sum_{n=0}^\infty x^n\sum_{k=0}^n(k+1)a_{k+1}a_{n-k} \end{align}</p> <p>Plugging this into $(1)$ and comparing coefficients of powers of $x$, we get:</p> <p>$$a_n=2^{1-n}\sum_{k=0}^n(k+1)a_{k+1}a_{n-k}$$</p> <p>However, I have no idea how to proceed with this recurrence relation for the coefficients of the power series that I am searching for.</p> <blockquote> <p>Therefore, my two questions are:</p> <p>$1.$ Is my approach correct and is there a way to solve this recurrence relation for the known coefficients of the power series expansion of $\sin(x)?$</p> <p>$2.$ Is there a general approach that is maybe less cumbersome to this kind of problem?</p> </blockquote>
[ { "answer_id": 2090792, "score": 2, "text": "<p>One could imagine the solution to be an exponential of the following form:</p>\n\n<p>$$f(x)=ae^{bx}$$</p>\n\n<p>Substituting values in, we have</p>\n\n<p>$$2a^2be^{2bx}=ae^{2bx}$$</p>\n\n<p>$$\\implies2ab=1$$</p>\n\n<p>$$\\implies a=1/2b$$</p>\n\n<p>$$\\implies f(x)=\\frac1{2b}e^{bx}$$</p>\n\n<p>which is another more general solution to your differential equation. If $b$ is complex, we might end up with $f(x)=\\sin(x)$ due to Euler's formula.</p>\n" }, { "answer_id": 2093139, "score": 0, "text": "<p>Entering the \"Ansatz\"\n$$f(x):=\\sum_{k=0}^n a_k x^k$$\nwith, e.g., $n=8$ into Mathematica and putting the coefficients of\n$$\\psi:=2f(x)f'(x)-f(2x)+O[x]^{n+1}$$\nto $0$ produces successive numerical values $a_k$ $(0\\leq k\\leq n)$ that can be attributed to the following solution functions:\n$$f(x)=\\qquad x\\&gt;,\\qquad{1\\over2\\lambda}e^{\\lambda x}\\&gt;,\\qquad {1\\over\\lambda}\\sin(\\lambda x)$$\nwith $\\lambda\\ne0$ an arbitrary complex constant.</p>\n\n<p>Note that a solution which is $\\ne0$ in a punctured neighborhood $\\dot U$ of $0$ is automatically $C^\\infty$ in $\\dot U$. But there might exist solutions that have an essential singularity at $0$.</p>\n" }, { "answer_id": 2090792, "score": 2, "text": "<p>One could imagine the solution to be an exponential of the following form:</p>\n\n<p>$$f(x)=ae^{bx}$$</p>\n\n<p>Substituting values in, we have</p>\n\n<p>$$2a^2be^{2bx}=ae^{2bx}$$</p>\n\n<p>$$\\implies2ab=1$$</p>\n\n<p>$$\\implies a=1/2b$$</p>\n\n<p>$$\\implies f(x)=\\frac1{2b}e^{bx}$$</p>\n\n<p>which is another more general solution to your differential equation. 
If $b$ is complex, we might end up with $f(x)=\\sin(x)$ due to Euler's formula.</p>\n" }, { "answer_id": 2093139, "score": 0, "text": "<p>Entering the \"Ansatz\"\n$$f(x):=\\sum_{k=0}^n a_k x^k$$\nwith, e.g., $n=8$ into Mathematica and putting the coefficients of\n$$\\psi:=2f(x)f'(x)-f(2x)+O[x]^{n+1}$$\nto $0$ produces successive numerical values $a_k$ $(0\\leq k\\leq n)$ that can be attributed to the following solution functions:\n$$f(x)=\\qquad x\\&gt;,\\qquad{1\\over2\\lambda}e^{\\lambda x}\\&gt;,\\qquad {1\\over\\lambda}\\sin(\\lambda x)$$\nwith $\\lambda\\ne0$ an arbitrary complex constant.</p>\n\n<p>Note that a solution which is $\\ne0$ in a punctured neighborhood $\\dot U$ of $0$ is automatically $C^\\infty$ in $\\dot U$. But there might exist solutions that have an essential singularity at $0$.</p>\n" } ]
Question: Solving a differential equation with a factor $2$ in the argument of the unknown function. Answer 1: <p>One could imagine the solution to be an exponential of the following form:</p> <p>$$f(x)=ae^{bx}$$</p> <p>Substituting values in, we have</p> <p>$$2a^2be^{2bx}=ae^{2bx}$$</p> <p>$$\implies2ab=1$$</p> <p>$$\implies a=1/2b$$</p> <p>$$\implies f(x)=\frac1{2b}e^{bx}$$</p> <p>which is another more general solution to your differential equation. If $b$ is complex, we might end up with $f(x)=\sin(x)$ due to Euler's formula.</p> Answer 2: <p>Entering the "Ansatz" $$f(x):=\sum_{k=0}^n a_k x^k$$ with, e.g., $n=8$ into Mathematica and putting the coefficients of $$\psi:=2f(x)f'(x)-f(2x)+O[x]^{n+1}$$ to $0$ produces successive numerical values $a_k$ $(0\leq k\leq n)$ that can be attributed to the following solution functions: $$f(x)=\qquad x\&gt;,\qquad{1\over2\lambda}e^{\lambda x}\&gt;,\qquad {1\over\lambda}\sin(\lambda x)$$ with $\lambda\ne0$ an arbitrary complex constant.</p> <p>Note that a solution which is $\ne0$ in a punctured neighborhood $\dot U$ of $0$ is automatically $C^\infty$ in $\dot U$. But there might exist solutions that have an essential singularity at $0$.</p> Answer 3: <p>One could imagine the solution to be an exponential of the following form:</p> <p>$$f(x)=ae^{bx}$$</p> <p>Substituting values in, we have</p> <p>$$2a^2be^{2bx}=ae^{2bx}$$</p> <p>$$\implies2ab=1$$</p> <p>$$\implies a=1/2b$$</p> <p>$$\implies f(x)=\frac1{2b}e^{bx}$$</p> <p>which is another more general solution to your differential equation. 
If $b$ is complex, we might end up with $f(x)=\sin(x)$ due to Euler's formula.</p> Answer 4: <p>Entering the "Ansatz" $$f(x):=\sum_{k=0}^n a_k x^k$$ with, e.g., $n=8$ into Mathematica and putting the coefficients of $$\psi:=2f(x)f'(x)-f(2x)+O[x]^{n+1}$$ to $0$ produces successive numerical values $a_k$ $(0\leq k\leq n)$ that can be attributed to the following solution functions: $$f(x)=\qquad x\&gt;,\qquad{1\over2\lambda}e^{\lambda x}\&gt;,\qquad {1\over\lambda}\sin(\lambda x)$$ with $\lambda\ne0$ an arbitrary complex constant.</p> <p>Note that a solution which is $\ne0$ in a punctured neighborhood $\dot U$ of $0$ is automatically $C^\infty$ in $\dot U$. But there might exist solutions that have an essential singularity at $0$.</p>
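The three solution families reported in the answers can be checked numerically. The sketch below is my own (the helper name `residual` and the sample points are mine, not from either answer): it pairs each candidate with its analytic derivative and verifies that $2f(x)f'(x) - f(2x)$ vanishes to machine precision for $f(x)=x$, $f(x)=e^{\lambda x}/(2\lambda)$ and $f(x)=\sin(\lambda x)/\lambda$.

```python
import math

# residual of the functional equation 2*f(x)*f'(x) - f(2x) at a point,
# given f together with its analytic derivative df
def residual(f, df, x):
    return 2.0 * f(x) * df(x) - f(2.0 * x)

lam = 0.7  # arbitrary nonzero constant

candidates = [
    (lambda x: x,                             lambda x: 1.0),                     # f(x) = x
    (lambda x: math.exp(lam * x) / (2 * lam), lambda x: math.exp(lam * x) / 2),   # e^{lam x}/(2 lam)
    (lambda x: math.sin(lam * x) / lam,       lambda x: math.cos(lam * x)),       # sin(lam x)/lam
]

for f, df in candidates:
    worst = max(abs(residual(f, df, x)) for x in [-1.0, -0.3, 0.2, 0.9])
    assert worst < 1e-12
```

This is only a sanity check, not a substitute for the algebraic verification; e.g. for $f(x)=e^{\lambda x}/(2\lambda)$ one has $2ff' = e^{2\lambda x}/(2\lambda) = f(2x)$ exactly.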
For $abc=1$ prove that $\sum\limits_\text{cyc}\frac{1}{a+3}\geq\sum\limits_\text{cyc}\frac{a}{a^2+3}$
2,090,810
4
https://math.stackexchange.com/questions/2090810/for-abc-1-prove-that-sum-limits-textcyc-frac1a3-geq-sum-limits-text
<p>Let $a$, $b$ and $c$ be positive numbers such that $abc=1$. Prove that: $$\frac{1}{a+3}+\frac{1}{b+3}+\frac{1}{c+3}\geq\frac{a}{a^2+3}+\frac{b}{b^2+3}+\frac{c}{c^2+3}$$</p> <p>I tried TL, BW, the Vasc's Theorems and more, but without success.</p> <p>I proved this inequality! </p> <p>I proved also the hardest version: $\sum\limits_{cyc}\frac{1}{a+4}\geq\sum\limits_{cyc}\frac{a}{a^2+4}$. </p> <p>Thanks all!</p>
[ { "answer_id": 2211313, "score": 3, "text": "<p>BW in the following version does not help.</p>\n\n<p>Let $a=x^3$, $b=y^3$ and $c=z^3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum_{cyc}\\frac{1}{x^3+3xyz}\\geq\\sum_{cyc}\\frac{x^3}{x^6+3x^2y^2z^2}$$ or\n$$\\sum_{cyc}\\frac{1}{x^3+3xyz}\\geq\\sum_{cyc}\\frac{x}{x^4+3y^2z^2}.$$</p>\n\n<p>Now, we can assume that $x=\\min\\{x,y,z\\}$, $y=x+u$ and $z=x+v$ </p>\n\n<p>and these substitutions give inequality, which I don't know to prove.</p>\n\n<p>But we can use another BW!</p>\n\n<p>Let $a=\\frac{y}{x}$, $b=\\frac{z}{y}$ and $c=\\frac{x}{z}$, where $x$, $y$ and $z$ are positives. </p>\n\n<p>Hence, we need to prove that\n$$\\sum_{cyc}\\frac{x}{3x+y}\\geq\\sum_{cyc}\\frac{xy}{3x^2+y^2}$$ or\n$$\\sum_{cyc}\\frac{x^3-x^2y}{(3x+y)(3x^2+y^2)}\\geq0.$$\nNow, let $x=\\min\\{x,y,z\\}$, $y=x+u$ and $z=x+v$.</p>\n\n<p>Hence, we need to prove that\n$$128(u^2-uv+v^2)x^7+16(16u^3+23u^2v-15uv^2+16v^3)x^6+$$\n$$+32(8u^4+27u^3v+12u^2v^2-11uv^3+8v^4)x^5+$$\n$$+4(32u^5+193u^4v+266u^3v^2-42u^2v^3-33uv^4+32v^5)x^4+$$\n$$+2(8u^6+178u^5v+435u^4v^2+152u^3v^3-99u^2v^4+30uv^5+8v^6)x^3+$$\n$$+uv(45u^5+375u^4v+291u^3v^2-83u^2v^3+57uv^4+3v^5)x^2+$$\n$$+2u^2v^2(24u^4+66u^3v-18u^2v^2+13uv^3+3v^4)x+$$\n$$+u^3v^3(18u^3-6u^2v+3uv^2+v^3)\\geq0,$$\nwhich is obvious.</p>\n\n<p>Done!</p>\n" }, { "answer_id": 4603845, "score": 1, "text": "<p>Another way.</p>\n<p>Since for any <span class=\"math-container\">$a&gt;0$</span> we have <span class=\"math-container\">$$\\frac{1}{a+3}-\\frac{a}{a^2+3}+\\frac{9}{64}\\geq\\frac{27}{64\\left(a^{\\frac{8}{3}}+a^{\\frac{4}{3}}+1\\right)}$$</span> and for positives <span class=\"math-container\">$a$</span>, <span class=\"math-container\">$b$</span> and <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$abc=1$</span> we have <span class=\"math-container\">$$\\sum_{cyc}\\frac{1}{a^2+a+1}\\geq1,$$</span> we obtain:\n<span 
class=\"math-container\">$$\\sum_{cyc}\\left(\\frac{1}{a+3}-\\frac{a}{a^2+3}\\right)=\\sum_{cyc}\\left(\\frac{1}{a+3}-\\frac{a}{a^2+3}+\\frac{9}{64}\\right)-\\frac{27}{64}\\geq$$</span>\n<span class=\"math-container\">$$\\geq\\frac{27}{64}\\left(\\sum_{cyc}\\frac{1}{a^{\\frac{8}{3}}+a^{\\frac{4}{3}}+1}-1\\right)\\geq0.$$</span></p>\n" }, { "answer_id": 2211313, "score": 3, "text": "<p>BW in the following version does not help.</p>\n\n<p>Let $a=x^3$, $b=y^3$ and $c=z^3$.</p>\n\n<p>Hence, we need to prove that\n$$\\sum_{cyc}\\frac{1}{x^3+3xyz}\\geq\\sum_{cyc}\\frac{x^3}{x^6+3x^2y^2z^2}$$ or\n$$\\sum_{cyc}\\frac{1}{x^3+3xyz}\\geq\\sum_{cyc}\\frac{x}{x^4+3y^2z^2}.$$</p>\n\n<p>Now, we can assume that $x=\\min\\{x,y,z\\}$, $y=x+u$ and $z=x+v$ </p>\n\n<p>and these substitutions give inequality, which I don't know to prove.</p>\n\n<p>But we can use another BW!</p>\n\n<p>Let $a=\\frac{y}{x}$, $b=\\frac{z}{y}$ and $c=\\frac{x}{z}$, where $x$, $y$ and $z$ are positives. </p>\n\n<p>Hence, we need to prove that\n$$\\sum_{cyc}\\frac{x}{3x+y}\\geq\\sum_{cyc}\\frac{xy}{3x^2+y^2}$$ or\n$$\\sum_{cyc}\\frac{x^3-x^2y}{(3x+y)(3x^2+y^2)}\\geq0.$$\nNow, let $x=\\min\\{x,y,z\\}$, $y=x+u$ and $z=x+v$.</p>\n\n<p>Hence, we need to prove that\n$$128(u^2-uv+v^2)x^7+16(16u^3+23u^2v-15uv^2+16v^3)x^6+$$\n$$+32(8u^4+27u^3v+12u^2v^2-11uv^3+8v^4)x^5+$$\n$$+4(32u^5+193u^4v+266u^3v^2-42u^2v^3-33uv^4+32v^5)x^4+$$\n$$+2(8u^6+178u^5v+435u^4v^2+152u^3v^3-99u^2v^4+30uv^5+8v^6)x^3+$$\n$$+uv(45u^5+375u^4v+291u^3v^2-83u^2v^3+57uv^4+3v^5)x^2+$$\n$$+2u^2v^2(24u^4+66u^3v-18u^2v^2+13uv^3+3v^4)x+$$\n$$+u^3v^3(18u^3-6u^2v+3uv^2+v^3)\\geq0,$$\nwhich is obvious.</p>\n\n<p>Done!</p>\n" }, { "answer_id": 4603845, "score": 1, "text": "<p>Another way.</p>\n<p>Since for any <span class=\"math-container\">$a&gt;0$</span> we have <span class=\"math-container\">$$\\frac{1}{a+3}-\\frac{a}{a^2+3}+\\frac{9}{64}\\geq\\frac{27}{64\\left(a^{\\frac{8}{3}}+a^{\\frac{4}{3}}+1\\right)}$$</span> and for positives <span 
class=\"math-container\">$a$</span>, <span class=\"math-container\">$b$</span> and <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$abc=1$</span> we have <span class=\"math-container\">$$\\sum_{cyc}\\frac{1}{a^2+a+1}\\geq1,$$</span> we obtain:\n<span class=\"math-container\">$$\\sum_{cyc}\\left(\\frac{1}{a+3}-\\frac{a}{a^2+3}\\right)=\\sum_{cyc}\\left(\\frac{1}{a+3}-\\frac{a}{a^2+3}+\\frac{9}{64}\\right)-\\frac{27}{64}\\geq$$</span>\n<span class=\"math-container\">$$\\geq\\frac{27}{64}\\left(\\sum_{cyc}\\frac{1}{a^{\\frac{8}{3}}+a^{\\frac{4}{3}}+1}-1\\right)\\geq0.$$</span></p>\n" } ]
Question: For $abc=1$ prove that $\sum\limits_\text{cyc}\frac{1}{a+3}\geq\sum\limits_\text{cyc}\frac{a}{a^2+3}$ Answer 1: <p>BW in the following version does not help.</p> <p>Let $a=x^3$, $b=y^3$ and $c=z^3$.</p> <p>Hence, we need to prove that $$\sum_{cyc}\frac{1}{x^3+3xyz}\geq\sum_{cyc}\frac{x^3}{x^6+3x^2y^2z^2}$$ or $$\sum_{cyc}\frac{1}{x^3+3xyz}\geq\sum_{cyc}\frac{x}{x^4+3y^2z^2}.$$</p> <p>Now, we can assume that $x=\min\{x,y,z\}$, $y=x+u$ and $z=x+v$ </p> <p>and these substitutions give inequality, which I don't know to prove.</p> <p>But we can use another BW!</p> <p>Let $a=\frac{y}{x}$, $b=\frac{z}{y}$ and $c=\frac{x}{z}$, where $x$, $y$ and $z$ are positives. </p> <p>Hence, we need to prove that $$\sum_{cyc}\frac{x}{3x+y}\geq\sum_{cyc}\frac{xy}{3x^2+y^2}$$ or $$\sum_{cyc}\frac{x^3-x^2y}{(3x+y)(3x^2+y^2)}\geq0.$$ Now, let $x=\min\{x,y,z\}$, $y=x+u$ and $z=x+v$.</p> <p>Hence, we need to prove that $$128(u^2-uv+v^2)x^7+16(16u^3+23u^2v-15uv^2+16v^3)x^6+$$ $$+32(8u^4+27u^3v+12u^2v^2-11uv^3+8v^4)x^5+$$ $$+4(32u^5+193u^4v+266u^3v^2-42u^2v^3-33uv^4+32v^5)x^4+$$ $$+2(8u^6+178u^5v+435u^4v^2+152u^3v^3-99u^2v^4+30uv^5+8v^6)x^3+$$ $$+uv(45u^5+375u^4v+291u^3v^2-83u^2v^3+57uv^4+3v^5)x^2+$$ $$+2u^2v^2(24u^4+66u^3v-18u^2v^2+13uv^3+3v^4)x+$$ $$+u^3v^3(18u^3-6u^2v+3uv^2+v^3)\geq0,$$ which is obvious.</p> <p>Done!</p> Answer 2: <p>Another way.</p> <p>Since for any <span class="math-container">$a&gt;0$</span> we have <span class="math-container">$$\frac{1}{a+3}-\frac{a}{a^2+3}+\frac{9}{64}\geq\frac{27}{64\left(a^{\frac{8}{3}}+a^{\frac{4}{3}}+1\right)}$$</span> and for positives <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> such that <span class="math-container">$abc=1$</span> we have <span class="math-container">$$\sum_{cyc}\frac{1}{a^2+a+1}\geq1,$$</span> we obtain: <span 
class="math-container">$$\sum_{cyc}\left(\frac{1}{a+3}-\frac{a}{a^2+3}\right)=\sum_{cyc}\left(\frac{1}{a+3}-\frac{a}{a^2+3}+\frac{9}{64}\right)-\frac{27}{64}\geq$$</span> <span class="math-container">$$\geq\frac{27}{64}\left(\sum_{cyc}\frac{1}{a^{\frac{8}{3}}+a^{\frac{4}{3}}+1}-1\right)\geq0.$$</span></p> Answer 3: <p>BW in the following version does not help.</p> <p>Let $a=x^3$, $b=y^3$ and $c=z^3$.</p> <p>Hence, we need to prove that $$\sum_{cyc}\frac{1}{x^3+3xyz}\geq\sum_{cyc}\frac{x^3}{x^6+3x^2y^2z^2}$$ or $$\sum_{cyc}\frac{1}{x^3+3xyz}\geq\sum_{cyc}\frac{x}{x^4+3y^2z^2}.$$</p> <p>Now, we can assume that $x=\min\{x,y,z\}$, $y=x+u$ and $z=x+v$ </p> <p>and these substitutions give inequality, which I don't know to prove.</p> <p>But we can use another BW!</p> <p>Let $a=\frac{y}{x}$, $b=\frac{z}{y}$ and $c=\frac{x}{z}$, where $x$, $y$ and $z$ are positives. </p> <p>Hence, we need to prove that $$\sum_{cyc}\frac{x}{3x+y}\geq\sum_{cyc}\frac{xy}{3x^2+y^2}$$ or $$\sum_{cyc}\frac{x^3-x^2y}{(3x+y)(3x^2+y^2)}\geq0.$$ Now, let $x=\min\{x,y,z\}$, $y=x+u$ and $z=x+v$.</p> <p>Hence, we need to prove that $$128(u^2-uv+v^2)x^7+16(16u^3+23u^2v-15uv^2+16v^3)x^6+$$ $$+32(8u^4+27u^3v+12u^2v^2-11uv^3+8v^4)x^5+$$ $$+4(32u^5+193u^4v+266u^3v^2-42u^2v^3-33uv^4+32v^5)x^4+$$ $$+2(8u^6+178u^5v+435u^4v^2+152u^3v^3-99u^2v^4+30uv^5+8v^6)x^3+$$ $$+uv(45u^5+375u^4v+291u^3v^2-83u^2v^3+57uv^4+3v^5)x^2+$$ $$+2u^2v^2(24u^4+66u^3v-18u^2v^2+13uv^3+3v^4)x+$$ $$+u^3v^3(18u^3-6u^2v+3uv^2+v^3)\geq0,$$ which is obvious.</p> <p>Done!</p> Answer 4: <p>Another way.</p> <p>Since for any <span class="math-container">$a&gt;0$</span> we have <span class="math-container">$$\frac{1}{a+3}-\frac{a}{a^2+3}+\frac{9}{64}\geq\frac{27}{64\left(a^{\frac{8}{3}}+a^{\frac{4}{3}}+1\right)}$$</span> and for positives <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> such that <span class="math-container">$abc=1$</span> we have <span 
class="math-container">$$\sum_{cyc}\frac{1}{a^2+a+1}\geq1,$$</span> we obtain: <span class="math-container">$$\sum_{cyc}\left(\frac{1}{a+3}-\frac{a}{a^2+3}\right)=\sum_{cyc}\left(\frac{1}{a+3}-\frac{a}{a^2+3}+\frac{9}{64}\right)-\frac{27}{64}\geq$$</span> <span class="math-container">$$\geq\frac{27}{64}\left(\sum_{cyc}\frac{1}{a^{\frac{8}{3}}+a^{\frac{4}{3}}+1}-1\right)\geq0.$$</span></p>
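Before attempting a proof, the inequality can be probed numerically. The sketch below is my own (it only samples, so it proves nothing): it draws random positive $a,b$ and forces $abc=1$ via $c=1/(ab)$, then checks that the left cyclic sum dominates the right one, with equality at $a=b=c=1$.

```python
import random

# difference of the two cyclic sums; the claim is that this is >= 0
# whenever a, b, c > 0 and a*b*c = 1
def lhs_minus_rhs(a, b, c):
    left = sum(1.0 / (t + 3.0) for t in (a, b, c))
    right = sum(t / (t * t + 3.0) for t in (a, b, c))
    return left - right

random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.01, 100.0)
    b = random.uniform(0.01, 100.0)
    c = 1.0 / (a * b)          # enforce abc = 1
    assert lhs_minus_rhs(a, b, c) >= -1e-12

# equality case a = b = c = 1: both sums equal 3/4
assert abs(lhs_minus_rhs(1.0, 1.0, 1.0)) < 1e-15
```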
Direct Decomposition of Groups
2,090,880
4
https://math.stackexchange.com/questions/2090880/direct-decomposition-of-groups
<p>A group $G$ is called directly irreducible if $G \simeq A \times B$ implies $G \simeq A$ or $G \simeq B$. I am looking for proof of the following theorem: </p> <ul> <li>If $G = G_1 \times G_2 \times \cdots \times G_n \simeq H_1 \times H_2 \times \cdots \times H_m$ , where the $G$'s and $H$'s are directly irreducible groups, and the lattice of congruences on $G$ has finite-length (it satisfies both ACC and DCC), then $n = m$ and there exists a permutation $\sigma \in S_n$ such that $G_i \simeq H_{\sigma(i)}$. </li> </ul> <p>A proof, or a reference to a proof, would be much appreciated. </p> <p>I have been told that there is a proof that involves the <a href="http://planetmath.org/kuroshoretheorem" rel="nofollow noreferrer">Kurosh-Ore Theorem</a>, on modular lattices. This is specifically the proof that I am looking for, but anything will do, thanks.</p>
[ { "answer_id": 2090919, "score": 3, "text": "<p>This is just not true. There exist a group $G$ and \"directly irreducible\" groups $A, B, C, D$ such that $G$ is isomorphic to both $A \\times B$ and $C \\times D$, these being non-isomorphic direct decompositions. See for example $\\S$ 42 in A.G. Kurosh, The theory of groups, Volume two, second English edition, translated by K.A. Hirsch, AMS Chelsea Publishing, 1960.</p>\n\n<p>If you are ready to excuse my French, see also this and other examples in Y. Cornulier and P. de la Harpe, Décompositions de groupes par produit direct et groupes de Coxeter, in Geometric group theory, 75-102, Trends in Math., Birkhäuser, Basel, 2007.</p>\n" }, { "answer_id": 2094832, "score": 1, "text": "<p>The mandatory reference is <a href=\"http://people.math.sc.edu/mcnulty/alglatvar/alvi.pdf\" rel=\"nofollow noreferrer\">Algebras, Lattices, Varieties (vol I)</a>, by McKenzie, McNulty and Taylor. In Section 2.3 they start with the subject, and Kurosh-Ore is Theorem 2.33. All of Chapter 5 is devoted to unique factorization, and you might be interested in Section 5.3.</p>\n\n<hr>\n\n<p>(This was a comment to the question that apparently is a satisfactory answer to the OP, so I'm reposting as such.)</p>\n" }, { "answer_id": 2090919, "score": 3, "text": "<p>This is just not true. There exist a group $G$ and \"directly irreducible\" groups $A, B, C, D$ such that $G$ is isomorphic to both $A \\times B$ and $C \\times D$, these being non-isomorphic direct decompositions. See for example $\\S$ 42 in A.G. Kurosh, The theory of groups, Volume two, second English edition, translated by K.A. Hirsch, AMS Chelsea Publishing, 1960.</p>\n\n<p>If you are ready to excuse my French, see also this and other examples in Y. Cornulier and P. 
de la Harpe, Décompositions de groupes par produit direct et groupes de Coxeter, in Geometric group theory, 75-102, Trends in Math., Birkhäuser, Basel, 2007.</p>\n" }, { "answer_id": 2094832, "score": 1, "text": "<p>The mandatory reference is <a href=\"http://people.math.sc.edu/mcnulty/alglatvar/alvi.pdf\" rel=\"nofollow noreferrer\">Algebras, Lattices, Varieties (vol I)</a>, by McKenzie, McNulty and Taylor. In Section 2.3 they start with the subject, and Kurosh-Ore is Theorem 2.33. All of Chapter 5 is devoted to unique factorization, and you might be interested in Section 5.3.</p>\n\n<hr>\n\n<p>(This was a comment to the question that apparently is satisfactory answer to the OP, so I'm reposting as such.)</p>\n" } ]
Question: Direct Decomposition of Groups Answer 1: <p>This is just not true. There exist a group $G$ and "directly irreducible" groups $A, B, C, D$ such that $G$ is isomorphic to both $A \times B$ and $C \times D$, these being non-isomorphic direct decompositions. See for example $\S$ 42 in A.G. Kurosh, The theory of groups, Volume two, second English edition, translated by K.A. Hirsch, AMS Chelsea Publishing, 1960.</p> <p>If you are ready to excuse my French, see also this and other examples in Y. Cornulier and P. de la Harpe, Décompositions de groupes par produit direct et groupes de Coxeter, in Geometric group theory, 75-102, Trends in Math., Birkhäuser, Basel, 2007.</p> Answer 2: <p>The mandatory reference is <a href="http://people.math.sc.edu/mcnulty/alglatvar/alvi.pdf" rel="nofollow noreferrer">Algebras, Lattices, Varieties (vol I)</a>, by McKenzie, McNulty and Taylor. In Section 2.3 they start with the subject, and Kurosh-Ore is Theorem 2.33. All of Chapter 5 is devoted to unique factorization, and you might be interested in Section 5.3.</p> <hr> <p>(This was a comment to the question that apparently is a satisfactory answer to the OP, so I'm reposting as such.)</p> Answer 3: <p>This is just not true. There exist a group $G$ and "directly irreducible" groups $A, B, C, D$ such that $G$ is isomorphic to both $A \times B$ and $C \times D$, these being non-isomorphic direct decompositions. See for example $\S$ 42 in A.G. Kurosh, The theory of groups, Volume two, second English edition, translated by K.A. Hirsch, AMS Chelsea Publishing, 1960.</p> <p>If you are ready to excuse my French, see also this and other examples in Y. Cornulier and P. 
de la Harpe, Décompositions de groupes par produit direct et groupes de Coxeter, in Geometric group theory, 75-102, Trends in Math., Birkhäuser, Basel, 2007.</p> Answer 4: <p>The mandatory reference is <a href="http://people.math.sc.edu/mcnulty/alglatvar/alvi.pdf" rel="nofollow noreferrer">Algebras, Lattices, Varieties (vol I)</a>, by McKenzie, McNulty and Taylor. In Section 2.3 they start with the subject, and Kurosh-Ore is Theorem 2.33. All of Chapter 5 is devoted to unique factorization, and you might be interested in Section 5.3.</p> <hr> <p>(This was a comment to the question that apparently is satisfactory answer to the OP, so I'm reposting as such.)</p>
Find all $n$ with $\!\bmod n\!:\, a\,$ invertible $\Rightarrow a^{-1} \equiv a,\,$ i.e. $a^2\equiv 1$
2,091,280
4
https://math.stackexchange.com/questions/2091280/find-all-n-with-bmod-n-a-invertible-rightarrow-a-1-equiv-a
<blockquote> <p>Find all positive integers $n$ such that $a \equiv a^{-1} \pmod{n}$ for all invertible $a$ modulo $n$.</p> </blockquote> <p>I found that $n = 1,2,4,6,8,12,24$ satisfy this. How can we prove that these are all of them?</p>
[ { "answer_id": 2091286, "score": 4, "text": "<p>Equivalently, you are asking that $a^2\\equiv 1\\pmod{n}$ for any invertible $a$ mod $n$. By the Chinese remainder theorem, it suffices to consider the case that $n=p^d$ is a prime power. If $p$ is odd and $p^d\\neq3$, then $a=2$ is invertible mod $p^d$ but $a^2=4\\not\\equiv 1\\pmod{p^d}$. If $p=2$ and $d&gt;3$, then $a=3$ is invertible mod $p^d$ but $a^2=9\\not\\equiv 1\\pmod{p^d}$.</p>\n\n<p>So if $n$ has the property you state, then the only possible odd prime factor of $n$ is $3$ (and $3^2\\not\\mid n$) and the highest power of $2$ dividing $n$ is at most $2^3$. That is, $n\\mid 24$, which is exactly the numbers on your list together with $n=3$ (which is also an example and is missing from your list).</p>\n" }, { "answer_id": 2091283, "score": 1, "text": "<p>notice that this would imply <span class=\"math-container\">$a^2\\equiv 1$</span> for all <span class=\"math-container\">$a$</span> coprime to <span class=\"math-container\">$n$</span>. 
Notice that <span class=\"math-container\">$5$</span> does not work because <span class=\"math-container\">$2^2$</span> is not <span class=\"math-container\">$1\\bmod 5$</span>, we conclude that if <span class=\"math-container\">$5|n$</span> then <span class=\"math-container\">$n$</span> does not work, by taking an invertible residue class that is <span class=\"math-container\">$2\\bmod 5$</span>.</p>\n<p>Finally, if <span class=\"math-container\">$5\\nmid n$</span> then we should have that <span class=\"math-container\">$5^2 \\equiv 1 \\bmod n$</span>, which implies <span class=\"math-container\">$n$</span> is a divisor of <span class=\"math-container\">$24$</span>.</p>\n<hr />\n<p>Note: to see that if <span class=\"math-container\">$5\\mid n$</span> there is an integer <span class=\"math-container\">$x$</span> that is invertible <span class=\"math-container\">$\\bmod n$</span> and congruent to <span class=\"math-container\">$2\\bmod 5$</span> we can write <span class=\"math-container\">$n = 5^am$</span> with <span class=\"math-container\">$5\\nmid m$</span> and take a solution to the system:</p>\n<p><span class=\"math-container\">$x \\equiv 2 \\bmod 5^a$</span></p>\n<p><span class=\"math-container\">$x \\equiv 1 \\bmod m$</span></p>\n" }, { "answer_id": 2091834, "score": 1, "text": "<p>The hypothesis implies that $U(n)$ has exponent $2$.</p>\n\n<p>By the Chinese remainder theorem, the exponent of $U(n)$ is given by the \n<a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">Carmichael function</a>\n$$\n\\lambda (n)=\\operatorname {lcm}(\\lambda (p_{1}^{{a_{1}}}),\\;\\lambda (p_{2}^{{a_{2}}}),\\dots ,\\lambda (p_{k}^{a_{k}}))\n$$\nwhere\n$$\n\\lambda (p^a)=\n\\begin{cases}\n\\;\\;\\varphi(p^a) &amp;\\mbox{if } p\\ne 2\\\\\n\\tfrac12\\varphi(p^a)&amp;\\text{if }p=2\n\\end{cases}\n$$\nTherefore, $\\varphi(p_i^{a_i})=2$ for all odd prime factors of $n$, which implies that $p=3, a=1$ is the only possible candidate, and also that $n$ 
contains at most a factor $2^3$.</p>\n\n<p>These restrictions give the list you have found.</p>\n" }, { "answer_id": 2092013, "score": 1, "text": "<p><span class=\"math-container\">$\\,\\overbrace{{\\rm if}\\,\\ 5\\!\\nmid\\! n\\,\\ {\\rm then}\\,\\ n\\,|\\, \\color{#90f}{24}\\!=\\! 5^2\\!-\\!1}^{\\large \\text{by hypothesis}},\\,$</span> else <span class=\"math-container\">$\\,5\\!\\mid\\! n =\\!\\! \\overbrace{\\color{#0a0}2^{\\large j}\\color{#c00}k}^{\\large {\\rm odd}\\ \\color{#c00}k}\\!, \\,$</span> &amp; <span class=\"math-container\">$\\,(\\color{#0a0}2\\!+\\!5\\color{#c00}k)^{\\large 2}\\!\\not\\equiv 1\\bmod{5},\\,$</span> so <span class=\"math-container\">$\\not\\equiv 1\\bmod n.\\,$</span> <span class=\"math-container\">$\\small\\bf QED$</span></p>\n" }, { "answer_id": 2091286, "score": 4, "text": "<p>Equivalently, you are asking that $a^2\\equiv 1\\pmod{n}$ for any invertible $a$ mod $n$. By the Chinese remainder theorem, it suffices to consider the case that $n=p^d$ is a prime power. If $p$ is odd and $p^d\\neq3$, then $a=2$ is invertible mod $p^d$ but $a^2=4\\not\\equiv 1\\pmod{p^d}$. If $p=2$ and $d&gt;3$, then $a=3$ is invertible mod $p^d$ but $a^2=9\\not\\equiv 1\\pmod{p^d}$.</p>\n\n<p>So if $n$ has the property you state, then the only possible odd prime factor of $n$ is $3$ (and $3^2\\not\\mid n$) and the highest power of $2$ dividing $n$ is at most $2^3$. That is, $n\\mid 24$, which is exactly the numbers on your list together with $n=3$ (which is also an example and is missing from your list).</p>\n" }, { "answer_id": 2091283, "score": 1, "text": "<p>notice that this would imply <span class=\"math-container\">$a^2\\equiv 1$</span> for all <span class=\"math-container\">$a$</span> coprime to <span class=\"math-container\">$n$</span>. 
Notice that <span class=\"math-container\">$5$</span> does not work because <span class=\"math-container\">$2^2$</span> is not <span class=\"math-container\">$1\\bmod 5$</span>, we conclude that if <span class=\"math-container\">$5|n$</span> then <span class=\"math-container\">$n$</span> does not work, by taking an invertible residue class that is <span class=\"math-container\">$2\\bmod 5$</span>.</p>\n<p>Finally, if <span class=\"math-container\">$5\\nmid n$</span> then we should have that <span class=\"math-container\">$5^2 \\equiv 1 \\bmod n$</span>, which implies <span class=\"math-container\">$n$</span> is a divisor of <span class=\"math-container\">$24$</span>.</p>\n<hr />\n<p>Note: to see that if <span class=\"math-container\">$5\\mid n$</span> there is an integer <span class=\"math-container\">$x$</span> that is invertible <span class=\"math-container\">$\\bmod n$</span> and congruent to <span class=\"math-container\">$2\\bmod 5$</span> we can write <span class=\"math-container\">$n = 5^am$</span> with <span class=\"math-container\">$5\\nmid m$</span> and take a solution to the system:</p>\n<p><span class=\"math-container\">$x \\equiv 2 \\bmod 5^a$</span></p>\n<p><span class=\"math-container\">$x \\equiv 1 \\bmod m$</span></p>\n" }, { "answer_id": 2091834, "score": 1, "text": "<p>The hypothesis implies that $U(n)$ has exponent $2$.</p>\n\n<p>By the Chinese remainder theorem, the exponent of $U(n)$ is given by the \n<a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">Carmichael function</a>\n$$\n\\lambda (n)=\\operatorname {lcm}(\\lambda (p_{1}^{{a_{1}}}),\\;\\lambda (p_{2}^{{a_{2}}}),\\dots ,\\lambda (p_{k}^{a_{k}}))\n$$\nwhere\n$$\n\\lambda (p^a)=\n\\begin{cases}\n\\;\\;\\varphi(p^a) &amp;\\mbox{if } p\\ne 2\\\\\n\\tfrac12\\varphi(p^a)&amp;\\text{if }p=2\n\\end{cases}\n$$\nTherefore, $\\varphi(p_i^{a_i})=2$ for all odd prime factors of $n$, which implies that $p=3, a=1$ is the only possible candidate, and also that $n$ 
contains at most a factor $2^3$.</p>\n\n<p>These restrictions give the list you have found.</p>\n" }, { "answer_id": 2092013, "score": 1, "text": "<p><span class=\"math-container\">$\\,\\overbrace{{\\rm if}\\,\\ 5\\!\\nmid\\! n\\,\\ {\\rm then}\\,\\ n\\,|\\, \\color{#90f}{24}\\!=\\! 5^2\\!-\\!1}^{\\large \\text{by hypothesis}},\\,$</span> else <span class=\"math-container\">$\\,5\\!\\mid\\! n =\\!\\! \\overbrace{\\color{#0a0}2^{\\large j}\\color{#c00}k}^{\\large {\\rm odd}\\ \\color{#c00}k}\\!, \\,$</span> &amp; <span class=\"math-container\">$\\,(\\color{#0a0}2\\!+\\!5\\color{#c00}k)^{\\large 2}\\!\\not\\equiv 1\\bmod{5},\\,$</span> so <span class=\"math-container\">$\\not\\equiv 1\\bmod n.\\,$</span> <span class=\"math-container\">$\\small\\bf QED$</span></p>\n" } ]
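The characterization in the answers is easy to corroborate by brute force. The sketch below is my own (not part of the posted answers): for each $n$ it squares every unit mod $n$ and keeps the $n$ for which all squares are $1$. The survivors up to $200$ are exactly the divisors of $24$, which, as the first answer points out, includes $n=3$, missing from the list in the question.

```python
from math import gcd

# True iff every a invertible mod n satisfies a^2 == 1 (mod n);
# for n = 1 the condition holds vacuously
def all_units_square_to_one(n):
    return all(a * a % n == 1 for a in range(1, n) if gcd(a, n) == 1)

good = [n for n in range(1, 200) if all_units_square_to_one(n)]
assert good == [1, 2, 3, 4, 6, 8, 12, 24]   # exactly the divisors of 24
assert all(24 % n == 0 for n in good)
```

The search bound $200$ is arbitrary; by the $5^2 \equiv 1 \bmod n \Rightarrow n \mid 24$ argument above, no larger $n$ can appear.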
Question: Find all $n$ with $\!\bmod n\!:\, a\,$ invertible $\Rightarrow a^{-1} \equiv a,\,$ i.e. $a^2\equiv 1$ Answer 1: <p>Equivalently, you are asking that $a^2\equiv 1\pmod{n}$ for any invertible $a$ mod $n$. By the Chinese remainder theorem, it suffices to consider the case that $n=p^d$ is a prime power. If $p$ is odd and $p^d\neq3$, then $a=2$ is invertible mod $p^d$ but $a^2=4\not\equiv 1\pmod{p^d}$. If $p=2$ and $d&gt;3$, then $a=3$ is invertible mod $p^d$ but $a^2=9\not\equiv 1\pmod{p^d}$.</p> <p>So if $n$ has the property you state, then the only possible odd prime factor of $n$ is $3$ (and $3^2\not\mid n$) and the highest power of $2$ dividing $n$ is at most $2^3$. That is, $n\mid 24$, which is exactly the numbers on your list together with $n=3$ (which is also an example and is missing from your list).</p> Answer 2: <p>notice that this would imply <span class="math-container">$a^2\equiv 1$</span> for all <span class="math-container">$a$</span> coprime to <span class="math-container">$n$</span>. 
Notice that <span class="math-container">$5$</span> does not work because <span class="math-container">$2^2$</span> is not <span class="math-container">$1\bmod 5$</span>, we conclude that if <span class="math-container">$5|n$</span> then <span class="math-container">$n$</span> does not work, by taking an invertible residue class that is <span class="math-container">$2\bmod 5$</span>.</p> <p>Finally, if <span class="math-container">$5\nmid n$</span> then we should have that <span class="math-container">$5^2 \equiv 1 \bmod n$</span>, which implies <span class="math-container">$n$</span> is a divisor of <span class="math-container">$24$</span>.</p> <hr /> <p>Note: to see that if <span class="math-container">$5\mid n$</span> there is an integer <span class="math-container">$x$</span> that is invertible <span class="math-container">$\bmod n$</span> and congruent to <span class="math-container">$2\bmod 5$</span> we can write <span class="math-container">$n = 5^am$</span> with <span class="math-container">$5\nmid m$</span> and take a solution to the system:</p> <p><span class="math-container">$x \equiv 2 \bmod 5^a$</span></p> <p><span class="math-container">$x \equiv 1 \bmod m$</span></p> Answer 3: <p>The hypothesis implies that $U(n)$ has exponent $2$.</p> <p>By the Chinese remainder theorem, the exponent of $U(n)$ is given by the <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">Carmichael function</a> $$ \lambda (n)=\operatorname {lcm}(\lambda (p_{1}^{{a_{1}}}),\;\lambda (p_{2}^{{a_{2}}}),\dots ,\lambda (p_{k}^{a_{k}})) $$ where $$ \lambda (p^a)= \begin{cases} \;\;\varphi(p^a) &amp;\mbox{if } p\ne 2\\ \tfrac12\varphi(p^a)&amp;\text{if }p=2 \end{cases} $$ Therefore, $\varphi(p_i^{a_i})=2$ for all odd prime factors of $n$, which implies that $p=3, a=1$ is the only possible candidate, and also that $n$ contains at most a factor $2^3$.</p> <p>These restrictions give the list you have found.</p> Answer 4: <p><span 
class="math-container">$\,\overbrace{{\rm if}\,\ 5\!\nmid\! n\,\ {\rm then}\,\ n\,|\, \color{#90f}{24}\!=\! 5^2\!-\!1}^{\large \text{by hypothesis}},\,$</span> else <span class="math-container">$\,5\!\mid\! n =\!\! \overbrace{\color{#0a0}2^{\large j}\color{#c00}k}^{\large {\rm odd}\ \color{#c00}k}\!, \,$</span> &amp; <span class="math-container">$\,(\color{#0a0}2\!+\!5\color{#c00}k)^{\large 2}\!\not\equiv 1\bmod{5},\,$</span> so <span class="math-container">$\not\equiv 1\bmod n.\,$</span> <span class="math-container">$\small\bf QED$</span></p> Answer 5: <p>Equivalently, you are asking that $a^2\equiv 1\pmod{n}$ for any invertible $a$ mod $n$. By the Chinese remainder theorem, it suffices to consider the case that $n=p^d$ is a prime power. If $p$ is odd and $p^d\neq3$, then $a=2$ is invertible mod $p^d$ but $a^2=4\not\equiv 1\pmod{p^d}$. If $p=2$ and $d&gt;3$, then $a=3$ is invertible mod $p^d$ but $a^2=9\not\equiv 1\pmod{p^d}$.</p> <p>So if $n$ has the property you state, then the only possible odd prime factor of $n$ is $3$ (and $3^2\not\mid n$) and the highest power of $2$ dividing $n$ is at most $2^3$. That is, $n\mid 24$, which is exactly the numbers on your list together with $n=3$ (which is also an example and is missing from your list).</p> Answer 6: <p>notice that this would imply <span class="math-container">$a^2\equiv 1$</span> for all <span class="math-container">$a$</span> coprime to <span class="math-container">$n$</span>. 
Since <span class="math-container">$5$</span> does not work (because <span class="math-container">$2^2$</span> is not <span class="math-container">$1\bmod 5$</span>), we conclude that if <span class="math-container">$5|n$</span> then <span class="math-container">$n$</span> does not work, by taking an invertible residue class that is <span class="math-container">$2\bmod 5$</span>.</p> <p>Finally, if <span class="math-container">$5\nmid n$</span> then we must have <span class="math-container">$5^2 \equiv 1 \bmod n$</span>, which implies <span class="math-container">$n$</span> is a divisor of <span class="math-container">$24$</span>.</p> <hr /> <p>Note: to see that if <span class="math-container">$5\mid n$</span> there is an integer <span class="math-container">$x$</span> that is invertible <span class="math-container">$\bmod n$</span> and congruent to <span class="math-container">$2\bmod 5$</span>, we can write <span class="math-container">$n = 5^am$</span> with <span class="math-container">$5\nmid m$</span> and take a solution to the system:</p> <p><span class="math-container">$x \equiv 2 \bmod 5^a$</span></p> <p><span class="math-container">$x \equiv 1 \bmod m$</span></p> Answer 7: <p>The hypothesis implies that $U(n)$ has exponent $2$.</p> <p>By the Chinese remainder theorem, the exponent of $U(n)$ is given by the <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">Carmichael function</a> $$ \lambda (n)=\operatorname {lcm}(\lambda (p_{1}^{{a_{1}}}),\;\lambda (p_{2}^{{a_{2}}}),\dots ,\lambda (p_{k}^{a_{k}})) $$ where $$ \lambda (p^a)= \begin{cases} \;\;\varphi(p^a) &amp;\mbox{if } p\ne 2 \text{ or } a\le 2\\ \tfrac12\varphi(p^a)&amp;\text{if }p=2,\ a\ge 3 \end{cases} $$ Therefore, $\varphi(p_i^{a_i})=2$ for all odd prime factors of $n$, which implies that $p=3, a=1$ is the only possible candidate, and also that $n$ contains at most a factor $2^3$.</p> <p>These restrictions give the list you have found.</p> Answer 8: <p><span
class="math-container">$\,\overbrace{{\rm if}\,\ 5\!\nmid\! n\,\ {\rm then}\,\ n\,|\, \color{#90f}{24}\!=\! 5^2\!-\!1}^{\large \text{by hypothesis}},\,$</span> else <span class="math-container">$\,5\!\mid\! n =\!\! \overbrace{\color{#0a0}2^{\large j}\color{#c00}k}^{\large {\rm odd}\ \color{#c00}k}\!, \,$</span> &amp; <span class="math-container">$\,(\color{#0a0}2\!+\!5\color{#c00}k)^{\large 2}\!\not\equiv 1\bmod{5},\,$</span> so <span class="math-container">$\not\equiv 1\bmod n.\,$</span> <span class="math-container">$\small\bf QED$</span></p>
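The divisor-of-$24$ conclusion reached in the answers above is easy to confirm by brute force. The following Python sketch (added here for illustration; it is not part of the original answers) checks every modulus up to $200$ and recovers exactly the divisors of $24$:

```python
from math import gcd

def all_units_self_inverse(n):
    # True exactly when every invertible residue a mod n satisfies a*a == 1 (mod n)
    return all(a * a % n == 1 for a in range(1, n) if gcd(a, n) == 1)

good = [n for n in range(1, 201) if all_units_self_inverse(n)]
print(good)  # -> [1, 2, 3, 4, 6, 8, 12, 24], the divisors of 24
```

Raising the search bound adds no further moduli, in line with the $n \mid 24$ conclusion.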
How to extract coefficients of a generating function using a computer?
2,091,328
4
https://math.stackexchange.com/questions/2091328/how-to-extract-coefficients-of-a-generating-function-using-a-computer
<p>Say that as a simple example I want to extract the fourth coefficient (partitions of $4$) of $$[x^4]\,G(x)= \prod_{i=1}^4 \frac{1}{1 - x^i}$$.</p> <p>What is the best way to go about it?</p> <p>Is it Wolfram Alpha? If so, how is it entered?</p> <p>Is it usually calculated with <em>ad hoc</em> code? Suggestions?</p>
[ { "answer_id": 2091387, "score": 2, "text": "<p>Here are some ways to compute this on a computer. First of all, if you have access to powerful mathematical software like Mathematica this can be done very easily. For example like this:</p>\n\n<pre><code>n = 4;\ngseries = Product[1/(1 - x^i), {i, 1, 4}];\na4 = Residue[gseries/x^(n + 1), {x, 0}]\n</code></pre>\n\n<p>where <a href=\"http://reference.wolfram.com/language/ref/Residue.html\" rel=\"nofollow noreferrer\"><code>Residue</code></a> is an instruction to extract the coefficient of the $\\frac{1}{x}$ term in a power-series. We can also <a href=\"http://reference.wolfram.com/language/ref/Series.html\" rel=\"nofollow noreferrer\"><code>Series</code></a> expand a function about $x=0$ and simply read off the coefficient using <code>Series[ gseries , {x, 0, n + 1} ]</code>.</p>\n\n<p>This solution relies on having all this complex functionality in the programming environment, but we can also quite easily solve this without having it.</p>\n\n<hr>\n\n<p>We can represent a polynomial (or power-series) $A(x) = \\sum_{n\\geq 0} a_n x^n$ on a computer as a sequence $\\{a_0,a_1,\\ldots,a_{n_{\\rm max}}\\}$. The product of two power-series is given by the <a href=\"https://en.wikipedia.org/wiki/Cauchy_product\" rel=\"nofollow noreferrer\">Cauchy-product</a></p>\n\n<p>$$A(x)B(x) = \\sum_{n\\geq 0} c_n x^n~~~\\text{where}~~~c_n = \\sum_{k=0}^n a_k b_{n-k}$$</p>\n\n<p>which tells us how the sequences transform under multiplication. We will use this as a basis to compute the power-series. Below is an implementation in Mathematica (but it uses no Mathematica specific functions so it can be implemented in any programming language without needing special libraries). 
</p>\n\n<pre><code>(* The maximum exponent we will cover *)\nnmax = 20;\n\n(* Returns the Cauchy product of two power-series *)\nCauchyProduct[an_, bn_] := Module[{cn},\n cn[m_] := Sum[an[[k]] bn[[m + 2 - k]], {k, 1, m + 1}];\n Table[cn[m], {m, 0, nmax - 1}]\n];\n\n(* The function 1 + x^i + x^2i + ... = (1,0,..,0,1,0,..,0,1,0,..) *)\ngfunc[i_] := Table[If[Mod[j - 1, i] == 0 , 1, 0], {j, 1, nmax}];\n\n(* Expand the product Prod gfunc_i = Prod ( 1/(1-x^i)) to order nmax *)\ngseries = gfunc[1];\nDo[\n gseries = CauchyProduct[gseries, gfunc[i]]\n , {i, 2, 4}];\n\n(* The coefficient of x^n *)\nn = 4;\nPrint[\"an = \", gseries[[n + 1]]];\n</code></pre>\n\n<p>Note that in this example we only need to take <code>nmax = 5</code> to get the answer.</p>\n" }, { "answer_id": 2092201, "score": 1, "text": "<p>The Cauchy product in R is handled with the <a href=\"https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution\" rel=\"nofollow noreferrer\">discrete convolution</a> function:</p>\n\n<pre><code>&gt; (l= lapply(seq_len(4)-1, function(i) rep(c(1, integer(i)), length.out=6)))\n[[1]]\n[1] 1 1 1 1 1 1\n\n[[2]]\n[1] 1 0 1 0 1 0\n\n[[3]]\n[1] 1 0 0 1 0 0\n\n[[4]]\n[1] 1 0 0 0 1 0\n\n&gt; round(Reduce(f = function(x, y) convolve(x, y, type = \"open\"), x = l))\n</code></pre>\n\n<p>$[1] \\text{0 0 0 0 1 1 2 3 5 6 6 8 8 8 6 6}\\bullet \\color{blue}{5}\\bullet \\text{3 2 1 1}$</p>\n\n<p>This is consistent with the results in <a href=\"https://www.wolframalpha.com/input/?i=(1%2Bx%2Bx%5E2%2Bx%5E3%2Bx%5E4%2Bx%5E5)+*+(1+%2B+x%5E2+%2B+x%5E4)+*+(1+%2B+x%5E3)+*+(1+%2B+x%5E4)\" rel=\"nofollow noreferrer\">Wolfram Alpha</a>:</p>\n\n<blockquote>\n <p>Input:</p>\n</blockquote>\n\n<p>$$\\small (1+x+x^2+x^3+x^4+x^5) \\times (1 + x^2 + x^4) \\times (1 + x^3) \\times (1 + x^4)$$</p>\n\n<blockquote>\n <p>Output:</p>\n</blockquote>\n\n<p>$$\\small x^{16}+x^{15}+2 x^{14}+3 x^{13}+5 x^{12}+6 x^{11}+6 x^{10}+8 x^9+8 x^8+8 x^7+6 x^6+6 x^5+\\large{\\color{blue}{5}}\\small x^4+3 x^3+2 
x^2+x+1$$</p>\n\n<hr>\n\n<p><strong>EDIT CONTAINING THE ANSWER SOUGHT IN THE OP:</strong></p>\n\n<p>The answer I was looking for is as follows using Wolfram Alpha (not only free, but always readily accessible):</p>\n\n<p><strong>Example 1:</strong></p>\n\n<p>... Well, simply the problem in the OP - i.e. the number of partitions of an integer (in the example, $4$):</p>\n\n<p><code>SeriesCoefficient[\\prod_{i=1}^4\\frac{1}{1-x^i},{x, 0, 4}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/IpfPQ.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/IpfPQ.png\" alt=\"enter image description here\"></a></p>\n\n<hr>\n\n<p><strong>Example 2:</strong></p>\n\n<p>From <a href=\"https://youtu.be/ZyUb5UxBA9Q?t=14m43s\" rel=\"nofollow noreferrer\">this presentation online</a>:</p>\n\n<blockquote>\n <p>If a fair coin is rolled 12 times, how many times will it sum to 30?</p>\n</blockquote>\n\n<p>We want the 30th coefficient of</p>\n\n<p>$$\\begin{align}&amp;[x^{30}]\\;(x+x^2+x^3+x^4+x^5+x^6)^{12}\\\\[2ex]\n=&amp;[x^{30}]\\;x^{12}(1+x+x^2+x^3+x^4+x^5)^{12}\\\\[2ex]\n=&amp;[x^{18}]\\;(1+x+x^2+x^3+x^4+x^5)^{12}\\\\[2ex]\n=&amp;[x^{18}]\\;\\left(\\frac{1-x^6}{1-x}\\right)^{12}\n\\end{align}$$</p>\n\n<p><code>SeriesCoefficient[(1 - x^6)^(12) (1 - x)^(-12),{x, 0, 18}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/PKgWE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/PKgWE.png\" alt=\"enter image description here\"></a></p>\n\n<p>Here is the alternative presented at the end of the analytical resolution of the problem in the presentation:</p>\n\n<p><a href=\"https://i.stack.imgur.com/YmQxX.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YmQxX.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/4kOJw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4kOJw.png\" alt=\"enter image description here\"></a></p>\n\n<hr>\n\n<p><strong>Example 
3:</strong></p>\n\n<p>The GF for the Fibonacci sequence can be derived from the recursion $f_n=f_{n-1} + f_{n-2}$ with $f_0=0$ and $f_1=1:$</p>\n\n<p>$$\\begin{align}\nF(x) &amp;= f_0 x^0 + f_1 x^1 + f_2 x^2 + \\cdots \\\\[2 ex]\n&amp;= x + f_1 x^2 + f_2 x^3 + \\cdots\\\\\n&amp;\\quad \\quad+ f_0 x^2 + f_1 x^3+\\cdots\\\\[2ex]\n&amp;=x+ xF(x) +x^2 F(x)\n\\end{align}\n$$</p>\n\n<p>as</p>\n\n<p>$$F(x)=\\frac{x}{1-x-x^2}.$$</p>\n\n<p>The 10th Fibonacci number is...</p>\n\n<p><code>SeriesCoefficient[x / (1 - x - x^2), {x, 0, 10}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/Ajakf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Ajakf.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2091387, "score": 2, "text": "<p>Here are some ways to compute this on a computer. First of all if you have access to powerful mathematical software like Mathematica this can be done very easily. For example like this:</p>\n\n<pre><code>n = 4;\ngseries = Product[1/(1 - x^i), {i, 1, 4}];\na4 = Residue[gseries/x^(n + 1), {x, 0}]\n</code></pre>\n\n<p>where <a href=\"http://reference.wolfram.com/language/ref/Residue.html\" rel=\"nofollow noreferrer\"><code>Residue</code></a> is an instruction to extract the coefficient of the $\\frac{1}{x}$ term in a power-series. We can also <a href=\"http://reference.wolfram.com/language/ref/Series.html\" rel=\"nofollow noreferrer\"><code>Series</code></a> expand a function about $x=0$ and simply read off the coefficient using <code>Series[ gseries , {x, 0, n + 1} ]</code>.</p>\n\n<p>This solution relies on having all these complex functionality in the programming environment, but we can also quite easily solve this without having it.</p>\n\n<hr>\n\n<p>We can represent a polynomial (or power-series) $A(x) = \\sum_{n\\geq 0} a_n x^n$ on a computer as a sequence $\\{a_0,a_1,\\ldots,a_{n_{\\rm max}}\\}$. 
The product of two power-series is given by the <a href=\"https://en.wikipedia.org/wiki/Cauchy_product\" rel=\"nofollow noreferrer\">Cauchy-product</a></p>\n\n<p>$$A(x)B(x) = \\sum_{n\\geq 0} c_n x^n~~~\\text{where}~~~c_n = \\sum_{k=0}^n a_k b_{n-k}$$</p>\n\n<p>which tells us how the sequences transform under multiplication. We will use this as a basis to compute the power-series. Below is an implementation in Mathematica (but it uses no Mathematica specific functions so it can be implemented in any programming language without needing special libraries). </p>\n\n<pre><code>(* The maximum exponent we will cover *)\nnmax = 20;\n\n(* Returns the Cauchy product of two power-series *)\nCauchyProduct[an_, bn_] := Module[{cn},\n cn[m_] := Sum[an[[k]] bn[[m + 2 - k]], {k, 1, m + 1}];\n Table[cn[m], {m, 0, nmax - 1}]\n];\n\n(* The function 1 + x^i + x^2i + ... = (1,0,..,0,1,0,..,0,1,0,..) *)\ngfunc[i_] := Table[If[Mod[j - 1, i] == 0 , 1, 0], {j, 1, nmax}];\n\n(* Expand the product Prod gfunc_i = Prod ( 1/(1-x^i)) to order nmax *)\ngseries = gfunc[1];\nDo[\n gseries = CauchyProduct[gseries, gfunc[i]]\n , {i, 2, 4}];\n\n(* The coefficient of x^n *)\nn = 4;\nPrint[\"an = \", gseries[[n + 1]]];\n</code></pre>\n\n<p>Note that in this example we only need to take <code>nmax = 5</code> to get the answer.</p>\n" }, { "answer_id": 2092201, "score": 1, "text": "<p>The Cauchy product in R is handled with the <a href=\"https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution\" rel=\"nofollow noreferrer\">discrete convolution</a> function:</p>\n\n<pre><code>&gt; (l= lapply(seq_len(4)-1, function(i) rep(c(1, integer(i)), length.out=6)))\n[[1]]\n[1] 1 1 1 1 1 1\n\n[[2]]\n[1] 1 0 1 0 1 0\n\n[[3]]\n[1] 1 0 0 1 0 0\n\n[[4]]\n[1] 1 0 0 0 1 0\n\n&gt; round(Reduce(f = function(x, y) convolve(x, y, type = \"open\"), x = l))\n</code></pre>\n\n<p>$[1] \\text{0 0 0 0 1 1 2 3 5 6 6 8 8 8 6 6}\\bullet \\color{blue}{5}\\bullet \\text{3 2 1 1}$</p>\n\n<p>This is consistent with the results in 
<a href=\"https://www.wolframalpha.com/input/?i=(1%2Bx%2Bx%5E2%2Bx%5E3%2Bx%5E4%2Bx%5E5)+*+(1+%2B+x%5E2+%2B+x%5E4)+*+(1+%2B+x%5E3)+*+(1+%2B+x%5E4)\" rel=\"nofollow noreferrer\">Wolfram Alpha</a>:</p>\n\n<blockquote>\n <p>Input:</p>\n</blockquote>\n\n<p>$$\\small (1+x+x^2+x^3+x^4+x^5) \\times (1 + x^2 + x^4) \\times (1 + x^3) \\times (1 + x^4)$$</p>\n\n<blockquote>\n <p>Output:</p>\n</blockquote>\n\n<p>$$\\small x^{16}+x^{15}+2 x^{14}+3 x^{13}+5 x^{12}+6 x^{11}+6 x^{10}+8 x^9+8 x^8+8 x^7+6 x^6+6 x^5+\\large{\\color{blue}{5}}\\small x^4+3 x^3+2 x^2+x+1$$</p>\n\n<hr>\n\n<p><strong>EDIT CONTAINING THE ANSWER SOUGHT IN THE OP:</strong></p>\n\n<p>The answer I was looking for is as follows using Wolfram Alpha (not only free, but always readily accessible):</p>\n\n<p><strong>Example 1:</strong></p>\n\n<p>... Well, simply the problem in the OP - i.e. the number of partitions of an integer (in the example, $4$):</p>\n\n<p><code>SeriesCoefficient[\\prod_{i=1}^4\\frac{1}{1-x^i},{x, 0, 4}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/IpfPQ.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/IpfPQ.png\" alt=\"enter image description here\"></a></p>\n\n<hr>\n\n<p><strong>Example 2:</strong></p>\n\n<p>From <a href=\"https://youtu.be/ZyUb5UxBA9Q?t=14m43s\" rel=\"nofollow noreferrer\">this presentation online</a>:</p>\n\n<blockquote>\n <p>If a fair coin is rolled 12 times, how many times will it sum to 30?</p>\n</blockquote>\n\n<p>We want the 30th coefficient of</p>\n\n<p>$$\\begin{align}&amp;[x^{30}]\\;(x+x^2+x^3+x^4+x^5+x^6)^{12}\\\\[2ex]\n=&amp;[x^{30}]\\;x^{12}(1+x+x^2+x^3+x^4+x^5)^{12}\\\\[2ex]\n=&amp;[x^{18}]\\;(1+x+x^2+x^3+x^4+x^5)^{12}\\\\[2ex]\n=&amp;[x^{18}]\\;\\left(\\frac{1-x^6}{1-x}\\right)^{12}\n\\end{align}$$</p>\n\n<p><code>SeriesCoefficient[(1 - x^6)^(12) (1 - x)^(-12),{x, 0, 18}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/PKgWE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/PKgWE.png\" alt=\"enter image 
description here\"></a></p>\n\n<p>Here is the alternative presented at the end of the analytical resolution of the problem in the presentation:</p>\n\n<p><a href=\"https://i.stack.imgur.com/YmQxX.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YmQxX.png\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/4kOJw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/4kOJw.png\" alt=\"enter image description here\"></a></p>\n\n<hr>\n\n<p><strong>Example 3:</strong></p>\n\n<p>The GF for the Fibonacci sequence can be derived from the recursion $f_n=f_{n-1} + f_{n-2}$ with $f_0=0$ and $f_1=1:$</p>\n\n<p>$$\\begin{align}\nF(x) &amp;= f_0 x^0 + f_1 x^1 + f_2 x^2 + \\cdots \\\\[2 ex]\n&amp;= x + f_1 x^2 + f_2 x^3 + \\cdots\\\\\n&amp;\\quad \\quad+ f_0 x^2 + f_1 x^3+\\cdots\\\\[2ex]\n&amp;=x+ xF(x) +x^2 F(x)\n\\end{align}\n$$</p>\n\n<p>as</p>\n\n<p>$$F(x)=\\frac{x}{1-x-x^2}.$$</p>\n\n<p>The 10th Fibonacci number is...</p>\n\n<p><code>SeriesCoefficient[x / (1 - x - x^2), {x, 0, 10}]</code></p>\n\n<p><a href=\"https://i.stack.imgur.com/Ajakf.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Ajakf.jpg\" alt=\"enter image description here\"></a></p>\n" } ]
Question: How to extract coefficients of a generating function using a computer? Answer 1: <p>Here are some ways to compute this on a computer. First of all, if you have access to powerful mathematical software like Mathematica this can be done very easily. For example like this:</p> <pre><code>n = 4; gseries = Product[1/(1 - x^i), {i, 1, 4}]; a4 = Residue[gseries/x^(n + 1), {x, 0}] </code></pre> <p>where <a href="http://reference.wolfram.com/language/ref/Residue.html" rel="nofollow noreferrer"><code>Residue</code></a> is an instruction to extract the coefficient of the $\frac{1}{x}$ term in a power-series. We can also <a href="http://reference.wolfram.com/language/ref/Series.html" rel="nofollow noreferrer"><code>Series</code></a> expand a function about $x=0$ and simply read off the coefficient using <code>Series[ gseries , {x, 0, n + 1} ]</code>.</p> <p>This solution relies on having all this complex functionality in the programming environment, but we can also quite easily solve this without having it.</p> <hr> <p>We can represent a polynomial (or power-series) $A(x) = \sum_{n\geq 0} a_n x^n$ on a computer as a sequence $\{a_0,a_1,\ldots,a_{n_{\rm max}}\}$. The product of two power-series is given by the <a href="https://en.wikipedia.org/wiki/Cauchy_product" rel="nofollow noreferrer">Cauchy-product</a></p> <p>$$A(x)B(x) = \sum_{n\geq 0} c_n x^n~~~\text{where}~~~c_n = \sum_{k=0}^n a_k b_{n-k}$$</p> <p>which tells us how the sequences transform under multiplication. We will use this as a basis to compute the power-series. Below is an implementation in Mathematica (but it uses no Mathematica specific functions so it can be implemented in any programming language without needing special libraries). 
</p> <pre><code>(* The maximum exponent we will cover *) nmax = 20; (* Returns the Cauchy product of two power-series *) CauchyProduct[an_, bn_] := Module[{cn}, cn[m_] := Sum[an[[k]] bn[[m + 2 - k]], {k, 1, m + 1}]; Table[cn[m], {m, 0, nmax - 1}] ]; (* The function 1 + x^i + x^2i + ... = (1,0,..,0,1,0,..,0,1,0,..) *) gfunc[i_] := Table[If[Mod[j - 1, i] == 0 , 1, 0], {j, 1, nmax}]; (* Expand the product Prod gfunc_i = Prod ( 1/(1-x^i)) to order nmax *) gseries = gfunc[1]; Do[ gseries = CauchyProduct[gseries, gfunc[i]] , {i, 2, 4}]; (* The coefficient of x^n *) n = 4; Print["an = ", gseries[[n + 1]]]; </code></pre> <p>Note that in this example we only need to take <code>nmax = 5</code> to get the answer.</p> Answer 2: <p>The Cauchy product in R is handled with the <a href="https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution" rel="nofollow noreferrer">discrete convolution</a> function:</p> <pre><code>&gt; (l= lapply(seq_len(4)-1, function(i) rep(c(1, integer(i)), length.out=6))) [[1]] [1] 1 1 1 1 1 1 [[2]] [1] 1 0 1 0 1 0 [[3]] [1] 1 0 0 1 0 0 [[4]] [1] 1 0 0 0 1 0 &gt; round(Reduce(f = function(x, y) convolve(x, y, type = "open"), x = l)) </code></pre> <p>$[1] \text{0 0 0 0 1 1 2 3 5 6 6 8 8 8 6 6}\bullet \color{blue}{5}\bullet \text{3 2 1 1}$</p> <p>This is consistent with the results in <a href="https://www.wolframalpha.com/input/?i=(1%2Bx%2Bx%5E2%2Bx%5E3%2Bx%5E4%2Bx%5E5)+*+(1+%2B+x%5E2+%2B+x%5E4)+*+(1+%2B+x%5E3)+*+(1+%2B+x%5E4)" rel="nofollow noreferrer">Wolfram Alpha</a>:</p> <blockquote> <p>Input:</p> </blockquote> <p>$$\small (1+x+x^2+x^3+x^4+x^5) \times (1 + x^2 + x^4) \times (1 + x^3) \times (1 + x^4)$$</p> <blockquote> <p>Output:</p> </blockquote> <p>$$\small x^{16}+x^{15}+2 x^{14}+3 x^{13}+5 x^{12}+6 x^{11}+6 x^{10}+8 x^9+8 x^8+8 x^7+6 x^6+6 x^5+\large{\color{blue}{5}}\small x^4+3 x^3+2 x^2+x+1$$</p> <hr> <p><strong>EDIT CONTAINING THE ANSWER SOUGHT IN THE OP:</strong></p> <p>The answer I was looking for is as follows using Wolfram Alpha (not 
only free, but always readily accessible):</p> <p><strong>Example 1:</strong></p> <p>... Well, simply the problem in the OP - i.e. the number of partitions of an integer (in the example, $4$):</p> <p><code>SeriesCoefficient[\prod_{i=1}^4\frac{1}{1-x^i},{x, 0, 4}]</code></p> <p><a href="https://i.stack.imgur.com/IpfPQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IpfPQ.png" alt="enter image description here"></a></p> <hr> <p><strong>Example 2:</strong></p> <p>From <a href="https://youtu.be/ZyUb5UxBA9Q?t=14m43s" rel="nofollow noreferrer">this presentation online</a>:</p> <blockquote> <p>If a fair die is rolled 12 times, in how many ways can it sum to 30?</p> </blockquote> <p>We want the 30th coefficient of</p> <p>$$\begin{align}&amp;[x^{30}]\;(x+x^2+x^3+x^4+x^5+x^6)^{12}\\[2ex] =&amp;[x^{30}]\;x^{12}(1+x+x^2+x^3+x^4+x^5)^{12}\\[2ex] =&amp;[x^{18}]\;(1+x+x^2+x^3+x^4+x^5)^{12}\\[2ex] =&amp;[x^{18}]\;\left(\frac{1-x^6}{1-x}\right)^{12} \end{align}$$</p> <p><code>SeriesCoefficient[(1 - x^6)^(12) (1 - x)^(-12),{x, 0, 18}]</code></p> <p><a href="https://i.stack.imgur.com/PKgWE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PKgWE.png" alt="enter image description here"></a></p> <p>Here is the alternative presented at the end of the analytical resolution of the problem in the presentation:</p> <p><a href="https://i.stack.imgur.com/YmQxX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmQxX.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/4kOJw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4kOJw.png" alt="enter image description here"></a></p> <hr> <p><strong>Example 3:</strong></p> <p>The GF for the Fibonacci sequence can be derived from the recursion $f_n=f_{n-1} + f_{n-2}$ with $f_0=0$ and $f_1=1:$</p> <p>$$\begin{align} F(x) &amp;= f_0 x^0 + f_1 x^1 + f_2 x^2 + \cdots \\[2 ex] &amp;= x + f_1 x^2 + f_2 x^3 + \cdots\\ &amp;\quad \quad+ f_0 x^2 + f_1 
x^3+\cdots\\[2ex] &amp;=x+ xF(x) +x^2 F(x) \end{align} $$</p> <p>as</p> <p>$$F(x)=\frac{x}{1-x-x^2}.$$</p> <p>The 10th Fibonacci number is...</p> <p><code>SeriesCoefficient[x / (1 - x - x^2), {x, 0, 10}]</code></p> <p><a href="https://i.stack.imgur.com/Ajakf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ajakf.jpg" alt="enter image description here"></a></p> Answer 3: <p>Here are some ways to compute this on a computer. First of all, if you have access to powerful mathematical software like Mathematica this can be done very easily. For example like this:</p> <pre><code>n = 4; gseries = Product[1/(1 - x^i), {i, 1, 4}]; a4 = Residue[gseries/x^(n + 1), {x, 0}] </code></pre> <p>where <a href="http://reference.wolfram.com/language/ref/Residue.html" rel="nofollow noreferrer"><code>Residue</code></a> is an instruction to extract the coefficient of the $\frac{1}{x}$ term in a power-series. We can also <a href="http://reference.wolfram.com/language/ref/Series.html" rel="nofollow noreferrer"><code>Series</code></a> expand a function about $x=0$ and simply read off the coefficient using <code>Series[ gseries , {x, 0, n + 1} ]</code>.</p> <p>This solution relies on having all this complex functionality in the programming environment, but we can also quite easily solve this without having it.</p> <hr> <p>We can represent a polynomial (or power-series) $A(x) = \sum_{n\geq 0} a_n x^n$ on a computer as a sequence $\{a_0,a_1,\ldots,a_{n_{\rm max}}\}$. The product of two power-series is given by the <a href="https://en.wikipedia.org/wiki/Cauchy_product" rel="nofollow noreferrer">Cauchy-product</a></p> <p>$$A(x)B(x) = \sum_{n\geq 0} c_n x^n~~~\text{where}~~~c_n = \sum_{k=0}^n a_k b_{n-k}$$</p> <p>which tells us how the sequences transform under multiplication. We will use this as a basis to compute the power-series. 
Below is an implementation in Mathematica (but it uses no Mathematica specific functions so it can be implemented in any programming language without needing special libraries). </p> <pre><code>(* The maximum exponent we will cover *) nmax = 20; (* Returns the Cauchy product of two power-series *) CauchyProduct[an_, bn_] := Module[{cn}, cn[m_] := Sum[an[[k]] bn[[m + 2 - k]], {k, 1, m + 1}]; Table[cn[m], {m, 0, nmax - 1}] ]; (* The function 1 + x^i + x^2i + ... = (1,0,..,0,1,0,..,0,1,0,..) *) gfunc[i_] := Table[If[Mod[j - 1, i] == 0 , 1, 0], {j, 1, nmax}]; (* Expand the product Prod gfunc_i = Prod ( 1/(1-x^i)) to order nmax *) gseries = gfunc[1]; Do[ gseries = CauchyProduct[gseries, gfunc[i]] , {i, 2, 4}]; (* The coefficient of x^n *) n = 4; Print["an = ", gseries[[n + 1]]]; </code></pre> <p>Note that in this example we only need to take <code>nmax = 5</code> to get the answer.</p> Answer 4: <p>The Cauchy product in R is handled with the <a href="https://en.wikipedia.org/wiki/Toeplitz_matrix#Discrete_convolution" rel="nofollow noreferrer">discrete convolution</a> function:</p> <pre><code>&gt; (l= lapply(seq_len(4)-1, function(i) rep(c(1, integer(i)), length.out=6))) [[1]] [1] 1 1 1 1 1 1 [[2]] [1] 1 0 1 0 1 0 [[3]] [1] 1 0 0 1 0 0 [[4]] [1] 1 0 0 0 1 0 &gt; round(Reduce(f = function(x, y) convolve(x, y, type = "open"), x = l)) </code></pre> <p>$[1] \text{0 0 0 0 1 1 2 3 5 6 6 8 8 8 6 6}\bullet \color{blue}{5}\bullet \text{3 2 1 1}$</p> <p>This is consistent with the results in <a href="https://www.wolframalpha.com/input/?i=(1%2Bx%2Bx%5E2%2Bx%5E3%2Bx%5E4%2Bx%5E5)+*+(1+%2B+x%5E2+%2B+x%5E4)+*+(1+%2B+x%5E3)+*+(1+%2B+x%5E4)" rel="nofollow noreferrer">Wolfram Alpha</a>:</p> <blockquote> <p>Input:</p> </blockquote> <p>$$\small (1+x+x^2+x^3+x^4+x^5) \times (1 + x^2 + x^4) \times (1 + x^3) \times (1 + x^4)$$</p> <blockquote> <p>Output:</p> </blockquote> <p>$$\small x^{16}+x^{15}+2 x^{14}+3 x^{13}+5 x^{12}+6 x^{11}+6 x^{10}+8 x^9+8 x^8+8 x^7+6 x^6+6 
x^5+\large{\color{blue}{5}}\small x^4+3 x^3+2 x^2+x+1$$</p> <hr> <p><strong>EDIT CONTAINING THE ANSWER SOUGHT IN THE OP:</strong></p> <p>The answer I was looking for is as follows using Wolfram Alpha (not only free, but always readily accessible):</p> <p><strong>Example 1:</strong></p> <p>... Well, simply the problem in the OP - i.e. the number of partitions of an integer (in the example, $4$):</p> <p><code>SeriesCoefficient[\prod_{i=1}^4\frac{1}{1-x^i},{x, 0, 4}]</code></p> <p><a href="https://i.stack.imgur.com/IpfPQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IpfPQ.png" alt="enter image description here"></a></p> <hr> <p><strong>Example 2:</strong></p> <p>From <a href="https://youtu.be/ZyUb5UxBA9Q?t=14m43s" rel="nofollow noreferrer">this presentation online</a>:</p> <blockquote> <p>If a fair die is rolled 12 times, in how many ways can it sum to 30?</p> </blockquote> <p>We want the 30th coefficient of</p> <p>$$\begin{align}&amp;[x^{30}]\;(x+x^2+x^3+x^4+x^5+x^6)^{12}\\[2ex] =&amp;[x^{30}]\;x^{12}(1+x+x^2+x^3+x^4+x^5)^{12}\\[2ex] =&amp;[x^{18}]\;(1+x+x^2+x^3+x^4+x^5)^{12}\\[2ex] =&amp;[x^{18}]\;\left(\frac{1-x^6}{1-x}\right)^{12} \end{align}$$</p> <p><code>SeriesCoefficient[(1 - x^6)^(12) (1 - x)^(-12),{x, 0, 18}]</code></p> <p><a href="https://i.stack.imgur.com/PKgWE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PKgWE.png" alt="enter image description here"></a></p> <p>Here is the alternative presented at the end of the analytical resolution of the problem in the presentation:</p> <p><a href="https://i.stack.imgur.com/YmQxX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmQxX.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/4kOJw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4kOJw.png" alt="enter image description here"></a></p> <hr> <p><strong>Example 3:</strong></p> <p>The GF for the Fibonacci sequence can be derived from the recursion 
$f_n=f_{n-1} + f_{n-2}$ with $f_0=0$ and $f_1=1:$</p> <p>$$\begin{align} F(x) &amp;= f_0 x^0 + f_1 x^1 + f_2 x^2 + \cdots \\[2 ex] &amp;= x + f_1 x^2 + f_2 x^3 + \cdots\\ &amp;\quad \quad+ f_0 x^2 + f_1 x^3+\cdots\\[2ex] &amp;=x+ xF(x) +x^2 F(x) \end{align} $$</p> <p>as</p> <p>$$F(x)=\frac{x}{1-x-x^2}.$$</p> <p>The 10th Fibonacci number is...</p> <p><code>SeriesCoefficient[x / (1 - x - x^2), {x, 0, 10}]</code></p> <p><a href="https://i.stack.imgur.com/Ajakf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ajakf.jpg" alt="enter image description here"></a></p>
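The Cauchy-product scheme described in the answers above translates directly into a few lines of Python (a sketch added here for illustration; the function names are mine, not from the answers). It reproduces $[x^4]\prod_{i=1}^4 \frac{1}{1-x^i} = 5$, the number of partitions of $4$:

```python
def cauchy_product(a, b):
    # c_m = sum_{k=0..m} a_k * b_{m-k}; both series truncated at the same degree
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(len(a))]

def geometric(i, nmax):
    # coefficients of 1/(1 - x^i) = 1 + x^i + x^(2i) + ... up to degree nmax
    return [1 if m % i == 0 else 0 for m in range(nmax + 1)]

nmax = 4
gseries = geometric(1, nmax)
for i in range(2, 5):
    gseries = cauchy_product(gseries, geometric(i, nmax))
print(gseries)  # -> [1, 1, 2, 3, 5]; the coefficient of x^4 is 5
```

The list read off is exactly the partition counts $p(0),\dots,p(4)$, matching the `nmax = 5` remark in the Mathematica answer.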
Show that $\sum\limits_{n=1}^{32}\frac1{n^2}=1 + \frac{1}{4} +\frac{1}{9} +\dots+ \frac{1}{1024}&lt;2$
2,091,928
4
https://math.stackexchange.com/questions/2091928/show-that-sum-limits-n-132-frac1n2-1-frac14-frac19-dots
<p>Show that $$1 + \frac{1}{4} +\frac{1}{9} +\dots+ \frac{1}{1024} &lt;2$$</p> <p>I know that the denominators are the perfect squares $1^2$ through $32^2$. I also know the identity $$\frac{1}{n(n+1)} &gt; \frac{1}{(n+1)^2} &gt; \frac{1}{(n+1)(n+2)},$$ but I am not able to apply it. Please help me.</p>
[ { "answer_id": 2091948, "score": 2, "text": "<p>We know $$\\frac{1}{n(n+1)} &gt; \\frac{1}{(n+1)^2} \\Rightarrow \\frac{1}{n} - \\frac{1}{n+1} &gt; \\frac{1}{(n+1)^2}$$ Writing this for $n =1,2,\\cdots,31$, we get $$\\frac{1}{1} - \\frac{1}{2} &gt; \\frac{1}{4}$$ $$\\frac{1}{2}-\\frac{1}{3} &gt; \\frac{1}{9}$$ $$\\vdots$$ $$\\frac{1}{31}-\\frac{1}{32} &gt; \\frac{1}{1024}$$ Adding gives us $$\\frac{1}{4} +\\frac{1}{9} +\\cdots +\\frac{1}{1024}&lt; 1-\\frac{1}{32}$$ $$\\Rightarrow 1+\\frac{1}{4}+ \\cdots+\\frac{1}{1024}&lt; 2-\\frac{1}{32} &lt; 2$$ Hope it helps.</p>\n" }, { "answer_id": 2091985, "score": 2, "text": "<p>Another way:</p>\n\n<p>$$\\begin{align} \\sum_{k=1}^{2^5} \\frac{1}{k^2}\n&amp;\\leq 1 + \\sum_{k=2}^{2^5}\\int_{k-1}^{k}\\frac{1}{t^2} dt\\\\\n&amp;=1+\\int_1^{2^5}\\frac{1}{t^2} dt \\\\\n&amp;=2-\\frac{1}{2^5}\\\\\n&amp;&lt;2\n\\end{align}$$</p>\n" }, { "answer_id": 2091957, "score": 1, "text": "<p>For $n&gt;0$, we have\n$$\\frac{1}{(n+1)^2}\\leq \\frac{1}{n(n+1)}=\\frac{1}{n}-\\frac{1}{n+1}$$</p>\n\n<p>and by sum,\n$$1+\\sum_{n=1}^{31}\\frac{1}{(n+1)^2}\\leq 2-\\frac{1}{32}&lt;2$$</p>\n" }, { "answer_id": 2091948, "score": 2, "text": "<p>We know $$\\frac{1}{n(n+1)} &gt; \\frac{1}{(n+1)^2} \\Rightarrow \\frac{1}{n} - \\frac{1}{n+1} &gt; \\frac{1}{(n+1)^2}$$ Writing this for $n =1,2,\\cdots,31$, we get $$\\frac{1}{1} - \\frac{1}{2} &gt; \\frac{1}{4}$$ $$\\frac{1}{2}-\\frac{1}{3} &gt; \\frac{1}{9}$$ $$\\vdots$$ $$\\frac{1}{31}-\\frac{1}{32} &gt; \\frac{1}{1024}$$ Adding gives us $$\\frac{1}{4} +\\frac{1}{9} +\\cdots +\\frac{1}{1024}&lt; 1-\\frac{1}{32}$$ $$\\Rightarrow 1+\\frac{1}{4}+ \\cdots+\\frac{1}{1024}&lt; 2-\\frac{1}{32} &lt; 2$$ Hope it helps.</p>\n" }, { "answer_id": 2091985, "score": 2, "text": "<p>Another way:</p>\n\n<p>$$\\begin{align} \\sum_{k=1}^{2^5} \\frac{1}{k^2}\n&amp;\\leq 1 + \\sum_{k=2}^{2^5}\\int_{k-1}^{k}\\frac{1}{t^2} dt\\\\\n&amp;=1+\\int_1^{2^5}\\frac{1}{t^2} dt \\\\\n&amp;=2-\\frac{1}{2^5}\\\\\n&amp;&lt;2\n\\end{align}$$</p>\n" }, { 
"answer_id": 2091957, "score": 1, "text": "<p>For $n&gt;0$, we have\n$$\\frac{1}{(n+1)^2}\\leq \\frac{1}{n(n+1)}=\\frac{1}{n}-\\frac{1}{n+1}$$</p>\n\n<p>and by sum,\n$$1+\\sum_{n=1}^{31}\\frac{1}{(n+1)^2}\\leq 2-\\frac{1}{32}&lt;2$$</p>\n" } ]
Question: Show that $\sum\limits_{n=1}^{32}\frac1{n^2}=1 + \frac{1}{4} +\frac{1}{9} +\dots+ \frac{1}{1024}&lt;2$ Answer 1: <p>We know $$\frac{1}{n(n+1)} &gt; \frac{1}{(n+1)^2} \Rightarrow \frac{1}{n} - \frac{1}{n+1} &gt; \frac{1}{(n+1)^2}$$ Adding for $n =1,2,\cdots,31$, we get, $$\frac{1}{1} - \frac{1}{2} &gt; \frac{1}{4}$$ $$\frac{1}{2}-\frac{1}{3} &gt; \frac{1}{9}$$ $$\vdots$$ $$\frac{1}{31}-\frac{1}{32} &gt; \frac{1}{1024}$$Adding gives us $$\frac{1}{4} +\frac{1}{9} +\cdots +\frac{1}{1024}&lt; 1-\frac{1}{32}$$ $$\Rightarrow 1+\frac{1}{4}+ \cdots+\frac{1}{1024}&lt; 2-\frac{1}{32} &lt; 2$$ Hope it helps.</p> Answer 2: <p>Another way:</p> <p>$$\begin{align} \sum_{k=1}^{2^5} \frac{1}{k^2} &amp;\leq 1 + \sum_{k=2}^{2^5}\int_{k-1}^{k}\frac{1}{t^2} dt\\ &amp;=1+\int_1^{2^5}\frac{1}{t^2} dt \\ &amp;=2-\frac{1}{2^5}\\ &amp;&lt;2 \end{align}$$</p> Answer 3: <p>For $n&gt;0$, we have $$\frac{1}{(n+1)^2}\leq \frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$$</p> <p>and by sum, $$1+\sum_{n=1}^{31}\frac{1}{(n+1)^2}\leq 2-\frac{1}{32}&lt;2$$</p>
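As a quick numerical sanity check of the telescoping bound above (my own sketch, not part of the answers), the partial sum can be computed exactly with rational arithmetic:

```python
from fractions import Fraction

# Exact partial sum: sum_{n=1}^{32} 1/n^2
s = sum(Fraction(1, n * n) for n in range(1, 33))

# The telescoping argument gives the sharper bound s < 2 - 1/32
assert s < 2 - Fraction(1, 32)
print(float(s))  # about 1.614, comfortably below 2
```

Using `Fraction` avoids any floating-point doubt about an inequality this tight.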
Sum of perfect squares and cubes $&lt; 10^6$
2,092,177
4
https://math.stackexchange.com/questions/2092177/sum-of-perfect-squares-and-cubes-106
<p>Are there more natural numbers from $1$ to $1,000,000$ inclusive that can be represented as the sum of a perfect square and cube, or numbers that can't be?</p>
[ { "answer_id": 2092184, "score": 4, "text": "<p>Your argument shows that there are <em>at most</em> $100,000$ numbers up to $1,000,000$ that can be represented as the sum of a square and a positive cube. This is enough to answer your question.</p>\n" }, { "answer_id": 2092242, "score": 2, "text": "<p>Your answer works as is, and in fact the approximation is really good.\nI tried to get a tighter result by bounding it by the following sum:</p>\n\n<p>$\\sum\\limits_{i=1}^{100} \\sqrt{10^6-i^3}$. This expression can be bounded by the integral of the curve</p>\n\n<p>$\\int\\limits_{x=0}^{100}\\sqrt{10^6-x^3}dx$; however, since the function $x^3$ is convex and $\\sqrt{x}$ is concave, the improvement is really small:</p>\n\n<p><a href=\"https://i.stack.imgur.com/1dRzu.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/1dRzu.png\" alt=\"enter image description here\"></a></p>\n\n<p>We only obtain the slightly better bound of $84130$.</p>\n\n<p>On the other hand, the actual result is $76356$, as can be found with the following code:</p>\n\n<pre><code>#include &lt;bits/stdc++.h&gt;\nusing namespace std;\n\nconst int MAX=1000000;\nint A[MAX];\n\nint main(){\n int res=0;\n for(int x=0;x*x&lt;MAX;x++){\n for(int y=1;x*x+y*y*y&lt;MAX;y++){\n A[x*x+y*y*y]=1;\n }\n }\n for(int i=1;i&lt;MAX;i++){\n res+=A[i];\n }\n printf(\"%d\\n\",res);\n}\n</code></pre>\n" } ]
Question: Sum of perfect squares and cubes $&lt; 10^6$ Answer 1: <p>Your argument shows that there are <em>at most</em> $100,000$ numbers up to $1,000,000$ that can be represented as the sum of a square and a positive cube. This is enough to answer your question.</p> Answer 2: <p>Your answer works as is, and in fact the approximation is really good. I tried to get a tighter result by bounding it by the following sum:</p> <p>$\sum\limits_{i=1}^{100} \sqrt{10^6-i^3}$. This expression can be bounded by the integral of the curve</p> <p>$\int\limits_{x=0}^{100}\sqrt{10^6-x^3}dx$; however, since the function $x^3$ is convex and $\sqrt{x}$ is concave, the improvement is really small:</p> <p><a href="https://i.stack.imgur.com/1dRzu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1dRzu.png" alt="enter image description here"></a></p> <p>We only obtain the slightly better bound of $84130$.</p> <p>On the other hand, the actual result is $76356$, as can be found with the following code:</p> <pre><code>#include &lt;bits/stdc++.h&gt;
using namespace std;

const int MAX=1000000;
int A[MAX];

int main(){
 int res=0;
 for(int x=0;x*x&lt;MAX;x++){
 for(int y=1;x*x+y*y*y&lt;MAX;y++){
 A[x*x+y*y*y]=1;
 }
 }
 for(int i=1;i&lt;MAX;i++){
 res+=A[i];
 }
 printf("%d\n",res);
}
</code></pre>
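For readers who want to double-check the C++ count without a compiler, here is a Python sketch of the same sieve (the name `reachable` is mine; the loop logic mirrors the nested loops in the answer above):

```python
LIMIT = 10**6

# Collect every value x^2 + y^3 below LIMIT with x >= 0 and y >= 1,
# exactly as the nested loops in the C++ answer do.
reachable = set()
x = 0
while x * x < LIMIT:
    y = 1
    while x * x + y ** 3 < LIMIT:
        reachable.add(x * x + y ** 3)
        y += 1
    x += 1

print(len(reachable))  # the answer above reports 76356
```

Since $76356$ is well below $500{,}000$, the numbers that cannot be so represented are the clear majority, matching the accepted counting argument.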
Show that a limit of a sequence is zero
2,092,292
4
https://math.stackexchange.com/questions/2092292/show-that-a-limit-of-a-sequence-is-zero
<p>Let $(a_n)$ positive sequence where $$\dfrac{n-1}{n} \leq \dfrac{a_{n+1}}{a_n} \leq \dfrac {n}{n+1}$$</p> <p>Prove that: $\lim_{n\to\infty} a_n = 0$.</p> <p>I was stuck here:</p> <p>I: $$\lim_{n\to\infty} \dfrac{n-1}{n} = \lim_{n\to\infty}1-\dfrac{1}{n} = 1$$ II: $$\lim_{n\to\infty} \dfrac {n}{n+1} = \lim_{n\to\infty} \dfrac{1}{1+ \dfrac{1}{n}} = 1$$</p> <p>Then: $$\lim_{n\to\infty} \dfrac{a_{n+1}}{a_n} = 1$$</p> <p>How do I continue from here?</p>
[ { "answer_id": 2092301, "score": 11, "text": "<p>Just observe that $$a_{n+1}\\le \\frac{n}{n+1}a_n\\le \\frac{n}{n+1}\\cdot\\frac{n-1}{n}a_{n-1}=\\frac{n-1}{n+1}a_{n-1}\\le\\cdots\\le \\frac{1}{n}a_1\\\\a_{n+1}\\ge \\frac{n-1}{n}a_n\\ge \\frac{n-1}{n}\\cdot\\frac{n-2}{n-1}a_{n-1}=\\frac{n-2}{n}a_{n-1}\\ge\\cdots\\ge \\frac{1}{n}a_2\\\\\\implies \\lim\\sup_n a_n\\le 0\\le \\lim\\inf_n a_n\\\\\\implies \\lim_n a_n=0$$ Actually the lower bound on $a_n/a_{n-1}$ is redundant as $a_n$ is a positive sequence.</p>\n" }, { "answer_id": 2092332, "score": 4, "text": "<p>@Samrat Mukhopadhyay Another solution, in the same vein as yours:</p>\n\n<p>Let us define: $b_n:=na_n$.</p>\n\n<p>The second inequality can thus be written: $b_{n+1}\\leq b_n.$ </p>\n\n<p>Thus $b_n$ is a decreasing sequence of positive numbers.</p>\n\n<p>Being bounded from below by zero, sequence $b_n$ converges to its \"lim inf\", say $M \\geq 0$. </p>\n\n<p>From $\\lim_{n \\to \\infty} b_n = M$ we conclude that, using the definition of $b_n$,</p>\n\n<p>$$\\lim_{n \\to \\infty} a_n = \\lim_{n \\to \\infty} b_n \\dfrac{1}{n} = M\\times 0 = 0.$$ </p>\n\n<p>Remark: surprisingly, the first inequality doesn't play any role.</p>\n" } ]
Question: Show that a limit of a sequence is zero Answer 1: <p>Just observe that $$a_{n+1}\le \frac{n}{n+1}a_n\le \frac{n}{n+1}\cdot\frac{n-1}{n}a_{n-1}=\frac{n-1}{n+1}a_{n-1}\le\cdots\le \frac{1}{n}a_1\\a_{n+1}\ge \frac{n-1}{n}a_n\ge \frac{n-1}{n}\cdot\frac{n-2}{n-1}a_{n-1}=\frac{n-2}{n}a_{n-1}\ge\cdots\ge \frac{1}{n}a_2\\\implies \lim\sup_n a_n\le 0\le \lim\inf_n a_n\\\implies \lim_n a_n=0$$ Actually the lower bound on $a_n/a_{n-1}$ is redundant as $a_n$ is a positive sequence.</p> Answer 2: <p>@Samrat Mukhopadhyay Another solution, in the same vein as yours:</p> <p>Let us define: $b_n:=na_n$.</p> <p>The second inequality can thus be written: $b_{n+1}\leq b_n.$ </p> <p>Thus $b_n$ is a decreasing sequence of positive numbers.</p> <p>Being bounded from below by zero, sequence $b_n$ converges to its "lim inf", say $M \geq 0$. </p> <p>From $\lim_{n \to \infty} b_n = M$ we conclude that, using the definition of $b_n$,</p> <p>$$\lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n \dfrac{1}{n} = M\times 0 = 0.$$ </p> <p>Remark: surprisingly, the first inequality doesn't play any role.</p>
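As a concrete illustration (my own example, not from the answers): $a_n = 1/n$ satisfies both inequalities, since $a_{n+1}/a_n = n/(n+1)$ and $(n-1)/n \le n/(n+1)$, and it indeed tends to $0$. A short sketch checking the hypothesis and the monotonicity of $b_n = n a_n$ used in the second answer:

```python
# a_n = 1/n for n = 1..100 (a[k] holds a_{k+1})
a = [1.0 / n for n in range(1, 101)]

# Hypothesis: (n-1)/n <= a_{n+1}/a_n <= n/(n+1)
for n in range(1, 100):
    ratio = a[n] / a[n - 1]          # a_{n+1} / a_n
    assert (n - 1) / n <= ratio <= n / (n + 1) + 1e-12

# b_n = n * a_n is non-increasing, so a_n = b_n / n -> 0
b = [n * a[n - 1] for n in range(1, 101)]
assert all(b[i + 1] <= b[i] + 1e-12 for i in range(99))
print(a[-1])  # a_100 = 0.01, heading to 0
```

The small tolerances only absorb floating-point rounding; for $a_n = 1/n$ the ratio equals the upper bound exactly.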
Computing $\lim_n \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}$
2,092,326
4
https://math.stackexchange.com/questions/2092326/computing-lim-n-fracn2n-sum-k-1n-frac2kk
<blockquote> <p>What is $\lim_n \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}$ ?</p> </blockquote> <p>Here are a few remarks:</p> <ul> <li><p>Since $x\mapsto \frac{2^x}{x}$ is increasing when $x\geq 2$, one might be tempted to use the integral test. This fails: when doing so, one gets $a_n\leq \sum_{k=1}^n \frac{2^k}{k}\leq b_n$ where $a_n\sim \frac{2^n}{\ln (2)n}$ and $b_n\sim \frac{2^{n+1}}{\ln (2)n}$. Unfortunately $b_n$ is too big and this estimate doesn't yield the limit.</p></li> <li><p>Here's my solution: since it's easy to sum $2^k$ and the difference $\frac{1}{k}-\frac{1}{k+1}$ is small, it's natural to try summation by parts: $$\begin{align} \sum_{k=1}^n \frac{2^k}{k} &amp;=\frac{S_n}{n+1}-1+\sum_{k=1}^n S_k \left(\frac{1}{k}-\frac{1}{k+1} \right)\quad \text{where} \; S_n=\sum_{k=0}^n 2^k\\ &amp;= \frac{2^{n+1}}{n+1} + \sum_{k=1}^n \frac{2^{k+1}}{k(k+1)} - \underbrace{1 - \sum_{k=1}^n\left(\frac{1}{k(k+1)}\right) - \frac{1}{n+1}}_{\text{bounded}}\\ \end{align}$$</p></li> </ul> <p>Intuition suggests $\displaystyle \sum_{k=1}^n \frac{2^{k+1}}{k(k+1)}=o\left(\frac{2^n}n \right)$ but it's not immediate to prove. I had to resort to another summation by parts! Indeed $$\begin{align}\small\sum_{k=1}^n \frac{2^{k+1}}{k(k+1)}&amp;= \small 2\left[ \frac{2^{n+1}}{n(n+1)} + 2\sum_{k=1}^n \left(\frac{2^{k+1}}{k(k+1)(k+2)}\right)-\frac 12 -2\sum_{k=1}^n \left(\frac{1}{k(k+1)(k+2)}\right) - \frac{1}{n(n+1)}\right]\\ &amp;\small\leq \frac{2^{n+2}}{n(n+1)}+\frac{2^{n+2}}{n(n+1)(n+2)}\cdot n \\ &amp;\small= o\left(\frac{2^n}n \right) \end{align}$$</p> <p>Hence $$\sum_{k=1}^n \frac{2^k}{k} = \frac{2^{n+1}}{n+1} + o\left(\frac{2^n}n \right)$$ and $$\lim_n \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k} = 2$$</p> <hr> <blockquote> <p>This solution is quite tedious and computational... That's why I'm looking for a shorter or smarter solution that avoids summation by parts (integration by parts is easy to perform on functions, it just gets quite heavy with series).</p> </blockquote>
[ { "answer_id": 2092497, "score": 12, "text": "<p>Write the expression as</p>\n\n<p>$$\\frac{\\sum_{k=1}^{n}2^k/k}{2^n/n}.$$</p>\n\n<p>Note the denominator $\\to \\infty.$ That rings the Stolz-Cesaro bell, so consider</p>\n\n<p>$$\\frac{2^{n+1}/(n+1)}{2^{n+1}/(n+1) -2^n/n} = \\frac{1}{1 -(n+1)/(2n)} \\to \\frac{1}{1/2} = 2.$$</p>\n\n<p>By Stolz-Cesaro, the desired limit is $2.$</p>\n" }, { "answer_id": 2092359, "score": 6, "text": "<p>Let $S_n = \\frac{n}{2^n} \\sum_{k=1}^{n} \\frac{2^k}{k}$. It has the following trivial lower bound.</p>\n\n<p>$$ 2 - \\frac{1}{2^n} = \\sum_{k=1}^{n} \\frac{1}{2^{n-k}} \\leq S_n. $$</p>\n\n<p>For an upper bound, fix $r \\in (0, 1)$ and let $N = N(r,n) = \\lfloor rn \\rfloor$. Then for $n$ large, we have $1 &lt; N &lt; n$ and hence</p>\n\n<p>\\begin{align*}\nS_n\n&amp;= \\frac{n}{2^n} \\sum_{k=1}^{N} \\color{red}{\\frac{2^k}{k}} + \\frac{n}{2^n} \\sum_{k=N+1}^{n} \\color{blue}{\\frac{2^k}{k}} \\\\\n&amp;\\leq \\frac{n}{2^n} \\sum_{k=1}^{N} \\color{red}{2^k}+ \\frac{n}{2^n} \\sum_{k=N+1}^{n} \\color{blue}{\\frac{2^k}{rn}} \\\\\n&amp;= \\frac{n}{2^n} (2^{N+1} - 1) + \\frac{1}{r 2^n}(2^{n+1} - 2^{N+1}) \\\\\n&amp;\\leq n 2^{-(1-r)n+1} + \\frac{2}{r}.\n\\end{align*}</p>\n\n<p>For the red terms, we utilized the trivial bound $\\frac{1}{k} \\leq 1$. For the blue terms, we utilized the fact that $k \\geq N+1$ implies $k \\geq nr$. Thus it follows that</p>\n\n<p>$$ 2 \\leq \\liminf_{n\\to\\infty} S_n \\leq \\limsup_{n\\to\\infty} S_n \\leq \\frac{2}{r}. $$</p>\n\n<p>Taking $r \\uparrow 1$ gives the wanted conclusion.</p>\n" }, { "answer_id": 2092407, "score": 2, "text": "<p>$\\begin{array}\\\\\n\\frac{n}{2^n}\\sum_{k=1}^n \\frac{2^k}{k}\n&amp;=n\\sum_{k=1}^n \\frac{2^{k-n}}{k}\\\\\n&amp;=n\\sum_{k=0}^{n-1} \\frac{2^{-k}}{n-k}\\\\\n&amp;=\\sum_{k=0}^{n-1} \\frac{2^{-k}}{1-k/n}\\\\\n\\text{so}\\\\\n\\frac{n}{2^n}\\sum_{k=1}^n \\frac{2^k}{k}-2\n&amp;=\\sum_{k=0}^{n-1} 2^{-k}(\\frac1{1-k/n}-1)-\\frac1{2^{n-1}}\\\\\n&amp;=\\sum_{k=0}^{n-1} 2^{-k}(\\frac{k/n}{1-k/n})-\\frac1{2^{n-1}}\\\\\n\\end{array}\n$</p>\n\n<p>If $k \\le cn$,\n$2^{-k}(\\frac{k/n}{1-k/n})\n\\le 2^{-k}\\frac{c}{1-c}\n$\nso\n$\\sum_{k=0}^{\\lfloor nc \\rfloor} 2^{-k}(\\frac{k/n}{1-k/n})\n\\le 2\\frac{c}{1-c}\n\\lt 4c$\nif\n$0 &lt; c &lt; \\frac12$.</p>\n\n<p>If\n$k &gt; cn$,</p>\n\n<p>$\\begin{array}\\\\\n\\sum_{k=cn}^{n-1} 2^{-k}(\\frac{k/n}{1-k/n})\n&amp;&lt;\\sum_{k=cn}^{n-1} 2^{-cn}(\\frac{k/n}{1-k/n})\\\\\n&amp;=2^{-cn}\\sum_{k=cn}^{n-1} (\\frac{k}{n-k})\\\\\n&amp;&lt;2^{-cn}\\sum_{k=cn}^{n-1} n\\\\\n&amp;&lt;n^22^{-cn}\\\\\n&amp;&lt;e^{-cn \\ln 2 + 2\\ln n}\\\\\n\\end{array}\n$</p>\n\n<p>For any fixed $c &gt; 0$,\n$-cn \\ln 2 + 2\\ln n\n\\to -\\infty$\nas\n$n \\to \\infty$,\nso that\n$e^{-cn \\ln 2 + 2\\ln n}\n\\to 0$.</p>\n\n<p>By first choosing $c$ small\nand then $n$ large,\nboth\n$4c$ and\n$e^{-cn \\ln 2 + 2\\ln n}$\ncan be made as small as we want,\nso that\n$\\frac{n}{2^n}\\sum_{k=1}^n \\frac{2^k}{k}-2\n$\ncan be made as small as we want,\nso that\n$\\lim_{n \\to \\infty}\\frac{n}{2^n}\\sum_{k=1}^n \\frac{2^k}{k}\n=2\n$.</p>\n" }, { "answer_id": 2092461, "score": 2, "text": "<p>Let\n$$ S_n=\\frac{n}{2^n}\\sum_{k=1}^n\\frac{2^k}{k}. \\tag{1}$$\nClearly\n$$ S_n=\\sum_{k=1}^n\\frac{n}{k}\\frac1{2^{n-k}}\\ge \\sum_{k=1}^n\\frac1{2^{n-k}}=2-\\frac{1}{2^n}. $$\nDefine\n\\begin{eqnarray}\nf_n(x)=\\frac{n}{2^n}\\sum_{k=1}^n\\frac{2^k}{k}\\left(\\frac12x\\right)^k\n\\end{eqnarray}\nand then $f_n(0)=0,f_n(2)=S_n$ and\n$$ f_n'(x)=\\frac{n}{2^n}\\sum_{k=1}^nx^{k-1}=\\frac{n}{2^n}\\frac{1-x^n}{1-x}.$$\nSo\n\\begin{eqnarray}\nS_n&amp;=&amp;\\frac{n}{2^n}\\int_0^2\\frac{1-x^n}{1-x}dx\\\\\n&amp;=&amp;\\frac{n}{2^n}\\int_0^1\\frac{1-x^n}{1-x}dx+\\frac{n}{2^n}\\int_1^2\\frac{x^n-1}{x-1}dx.\n\\end{eqnarray}\nSince\n$$ \\int_0^1\\frac{1-x^n}{1-x}dx=\\ln n+\\gamma+o(1),$$\none has\n$$ \\frac{n}{2^n}\\int_0^1\\frac{1-x^n}{1-x}dx=o(1). $$\nNoting that if $x\\in[1,2]$, then $x-1\\ge1$ and $x^n-1\\le x^n$, one has\n$$ S_n\\le o(1)+\\frac{n}{2^n}\\int_1^2\\frac{x^n-1}{x-1}dx\\le o(1)+\\frac{n}{2^n}\\int_1^2x^ndx=o(1)+\\frac{n}{2^n}\\frac{1}{n+1}(2^{n+1}-1).\\tag{2}$$\nFrom (1) and (2), one has\n$$ 2-\\frac{1}{2^n}\\le S_n\\le o(1)+\\frac{n}{2^n}\\frac{1}{n+1}(2^{n+1}-1).$$\nLetting $n\\to\\infty$, one has\n$$ \\lim_{n\\to\\infty}S_n=2. $$</p>\n" }, { "answer_id": 2092465, "score": 2, "text": "<p>Let $f(x)=\\sum_{k=1}^n\\frac{x^k}{k}$. Then, we can write</p>\n\n<p>$$f(x)=\\sum_{k=1}^n \\int_0^x t^{k-1}\\,dt=\\int_0^x \\frac{1-t^n}{1-t}\\,dt$$</p>\n\n<p>so that $f(2)=\\sum_{k=1}^n\\frac{2^k}{k}=\\int_0^2 \\frac{1-t^n}{1-t}\\,dt$.</p>\n\n<hr>\n\n<p>Next, we choose $0&lt;\\delta&lt;1$ and write</p>\n\n<p>$$\\begin{align}\n\\frac{n}{2^n}\\int_0^2 \\frac{1-t^n}{1-t}\\,dt&amp;=\\frac{n}{2^n}\\int_0^{2-\\delta}\\frac{1-t^n}{1-t}\\,dt+\\frac{n}{2^n}\\int_{2-\\delta}^2\\frac{t^n-1}{t-1}\\,dt \\tag 1\n\\end{align}$$</p>\n\n<p>We can apply the mean-value theorem for the second term on the right-hand side of $(1)$ to reveal that for some $\\xi_n\\in [2-\\delta,2]$ </p>\n\n<p>$$\\begin{align}\n\\frac{n}{2^n}\\int_{2-\\delta}^2 \\frac{t^n-1}{t-1}\\,dt&amp;=\\frac{n}{2^n(\\xi_n-1)}\\int_{2-\\delta}^2(t^n-1)\\,dt\\\\\\\\\n&amp;=\\frac{n}{2^n(\\xi_n-1)}\\left(\\frac{2^{n+1}-(2-\\delta)^{n+1}}{n+1}+\\delta\\right) \\tag 2\n\\end{align}$$</p>\n\n<p>Letting $n\\to \\infty$ in $(2)$ yields</p>\n\n<p>$$\\lim_{n\\to \\infty}\\frac{n}{2^n}\\int_{2-\\delta}^2 \\frac{t^n-1}{t-1}\\,dt=\\frac{2}{\\xi-1}$$</p>\n\n<p>For any given $\\epsilon&gt;0$, we can choose $0&lt;\\delta&lt;1$ so small that $\\left|\\frac{2}{\\xi_n-1}-2\\right|&lt;\\epsilon$. Now, we proceed with that $0&lt;\\delta&lt;1$ fixed. </p>\n\n<p>Noting that $\\frac{1-t^n}{1-t}$ is positive and monotonically increasing on $[0,2]$, the first term on the right-hand side of $(1)$ is bounded above by $\\frac{n}{2^n}\\frac{(2-\\delta)^n-1}{1-\\delta}\\to 0$ as $n\\to \\infty$.</p>\n\n<hr>\n\n<p>Putting it all together, we obtain the coveted limit</p>\n\n<blockquote>\n <p>$$\\lim_{n\\to \\infty}\\frac{n}{2^n}\\sum_{k=1}^n\\frac{2^k}{k}=2$$</p>\n</blockquote>\n" }, { "answer_id": 2092680, "score": 2, "text": "<p>I encountered this limit when computing\n$$\n\\sum_{k=0}^n\\frac1{\\binom{n}{k}}=\\frac{n+1}{2^n}\\sum_{k=0}^n\\frac{2^k}{k+1}\n$$\nin equation $(8)$ of <a href=\"https://mathematica.stackexchange.com/a/39470\">this answer</a>, and for which an asymptotic expansion is given in equation $(6)$ of <a href=\"https://math.stackexchange.com/a/629637\">this answer</a>:\n$$\n\\begin{align}\n\\frac{n}{2^n}\\sum_{k=1}^n\\frac{2^k}k\n&amp;=\\sum_{k=1}^n\\frac1{\\binom{n-1}{k-1}}\\\\\n&amp;\\sim2+\\frac2{n-1}+\\frac4{(n-1)(n-2)}+\\frac{12}{(n-1)(n-2)(n-3)}\\\\\n&amp;+\\frac{48}{(n-1)(n-2)(n-3)(n-4)}+O\\left(\\frac1{n^5}\\right)\n\\end{align}\n$$</p>\n" } ]
Question: Computing $\lim_n \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}$ Answer 1: <p>Write the expression as</p> <p>$$\frac{\sum_{k=1}^{n}2^k/k}{2^n/n}.$$</p> <p>Note the denominator $\to \infty.$ That rings the Stolz-Cesaro bell, so consider</p> <p>$$\frac{2^{n+1}/(n+1)}{2^{n+1}/(n+1) -2^n/n} = \frac{1}{1 -(n+1)/(2n)} \to \frac{1}{1/2} = 2.$$</p> <p>By Stolz-Cesaro, the desired limit is $2.$</p> Answer 2: <p>Let $S_n = \frac{n}{2^n} \sum_{k=1}^{n} \frac{2^k}{k}$. It has the following trivial lower bound.</p> <p>$$ 2 - \frac{1}{2^n} = \sum_{k=1}^{n} \frac{1}{2^{n-k}} \leq S_n. $$</p> <p>For an upper bound, fix $r \in (0, 1)$ and let $N = N(r,n) = \lfloor rn \rfloor$. Then for $n$ large, we have $1 &lt; N &lt; n$ and hence</p> <p>\begin{align*} S_n &amp;= \frac{n}{2^n} \sum_{k=1}^{N} \color{red}{\frac{2^k}{k}} + \frac{n}{2^n} \sum_{k=N+1}^{n} \color{blue}{\frac{2^k}{k}} \\ &amp;\leq \frac{n}{2^n} \sum_{k=1}^{N} \color{red}{2^k}+ \frac{n}{2^n} \sum_{k=N+1}^{n} \color{blue}{\frac{2^k}{rn}} \\ &amp;= \frac{n}{2^n} (2^{N+1} - 1) + \frac{1}{r 2^n}(2^{n+1} - 2^{N+1}) \\ &amp;\leq n 2^{-(1-r)n+1} + \frac{2}{r}. \end{align*}</p> <p>For the red terms, we utilized the trivial bound $\frac{1}{k} \leq 1$. For the blue terms, we utilized the fact that $k \geq N+1$ implies $k \geq nr$. Thus it follows that</p> <p>$$ 2 \leq \liminf_{n\to\infty} S_n \leq \limsup_{n\to\infty} S_n \leq \frac{2}{r}. 
$$</p> <p>Taking $r \uparrow 1$ gives the wanted conclusion.</p> Answer 3: <p>$\begin{array}\\ \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k} &amp;=n\sum_{k=1}^n \frac{2^{k-n}}{k}\\ &amp;=n\sum_{k=0}^{n-1} \frac{2^{-k}}{n-k}\\ &amp;=\sum_{k=0}^{n-1} \frac{2^{-k}}{1-k/n}\\ \text{so}\\ \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}-2 &amp;=\sum_{k=0}^{n-1} 2^{-k}(\frac1{1-k/n}-1)-\frac1{2^{n-1}}\\ &amp;=\sum_{k=0}^{n-1} 2^{-k}(\frac{k/n}{1-k/n})-\frac1{2^{n-1}}\\ \end{array} $</p> <p>If $k \le cn$, $2^{-k}(\frac{k/n}{1-k/n}) \le 2^{-k}\frac{c}{1-c} $ so $\sum_{k=0}^{\lfloor nc \rfloor} 2^{-k}(\frac{k/n}{1-k/n}) \le 2\frac{c}{1-c} \lt 4c$ if $0 &lt; c &lt; \frac12$.</p> <p>If $k &gt; cn$,</p> <p>$\begin{array}\\ \sum_{k=cn}^{n-1} 2^{-k}(\frac{k/n}{1-k/n}) &amp;&lt;\sum_{k=cn}^{n-1} 2^{-cn}(\frac{k/n}{1-k/n})\\ &amp;=2^{-cn}\sum_{k=cn}^{n-1} (\frac{k}{n-k})\\ &amp;&lt;2^{-cn}\sum_{k=cn}^{n-1} n\\ &amp;&lt;n^22^{-cn}\\ &amp;&lt;e^{-cn \ln 2 + 2\ln n}\\ \end{array} $</p> <p>For any fixed $c &gt; 0$, $-cn \ln 2 + 2\ln n \to -\infty$ as $n \to \infty$, so that $e^{-cn \ln 2 + 2\ln n} \to 0$.</p> <p>By first choosing $c$ small and then $n$ large, both $4c$ and $e^{-cn \ln 2 + 2\ln n}$ can be made as small as we want, so that $\frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}-2 $ can be made as small as we want, so that $\lim_{n \to \infty}\frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k} =2 $.</p> Answer 4: <p>Let $$ S_n=\frac{n}{2^n}\sum_{k=1}^n\frac{2^k}{k}. \tag{1}$$ Clearly $$ S_n=\sum_{k=1}^n\frac{n}{k}\frac1{2^{n-k}}\ge \sum_{k=1}^n\frac1{2^{n-k}}=2-\frac{1}{2^n}. $$ Define \begin{eqnarray} f_n(x)=\frac{n}{2^n}\sum_{k=1}^n\frac{2^k}{k}\left(\frac12x\right)^k \end{eqnarray} and then $f_n(0)=0,f_n(2)=S_n$ and $$ f_n'(x)=\frac{n}{2^n}\sum_{k=1}^nx^{k-1}=\frac{n}{2^n}\frac{1-x^n}{1-x}.$$ So \begin{eqnarray} S_n&amp;=&amp;\frac{n}{2^n}\int_0^2\frac{1-x^n}{1-x}dx\\ &amp;=&amp;\frac{n}{2^n}\int_0^1\frac{1-x^n}{1-x}dx+\frac{n}{2^n}\int_1^2\frac{x^n-1}{x-1}dx. 
\end{eqnarray} Since $$ \int_0^1\frac{1-x^n}{1-x}dx=\ln n+\gamma+o(1),$$ one has $$ \frac{n}{2^n}\int_0^1\frac{1-x^n}{1-x}dx=o(1). $$ Noting that if $x\in[1,2]$, then $x-1\ge1$ and $x^n-1\le x^n$, one has $$ S_n\le o(1)+\frac{n}{2^n}\int_1^2\frac{x^n-1}{x-1}dx\le o(1)+\frac{n}{2^n}\int_1^2x^ndx=o(1)+\frac{n}{2^n}\frac{1}{n+1}(2^{n+1}-1).\tag{2}$$ From (1) and (2), one has $$ 2-\frac{1}{2^n}\le S_n\le o(1)+\frac{n}{2^n}\frac{1}{n+1}(2^{n+1}-1).$$ Letting $n\to\infty$, one has $$ \lim_{n\to\infty}S_n=2. $$</p> Answer 5: <p>Let $f(x)=\sum_{k=1}^n\frac{x^k}{k}$. Then, we can write</p> <p>$$f(x)=\sum_{k=1}^n \int_0^x t^{k-1}\,dt=\int_0^x \frac{1-t^n}{1-t}\,dt$$</p> <p>so that $f(2)=\sum_{k=1}^n\frac{2^k}{k}=\int_0^2 \frac{1-t^n}{1-t}\,dt$.</p> <hr> <p>Next, we choose $0&lt;\delta&lt;1$ and write</p> <p>$$\begin{align} \frac{n}{2^n}\int_0^2 \frac{1-t^n}{1-t}\,dt&amp;=\frac{n}{2^n}\int_0^{2-\delta}\frac{1-t^n}{1-t}\,dt+\frac{n}{2^n}\int_{2-\delta}^2\frac{t^n-1}{t-1}\,dt \tag 1 \end{align}$$</p> <p>We can apply the mean-value theorem to the second term on the right-hand side of $(1)$ to reveal that for some $\xi_n\in [2-\delta,2]$ </p> <p>$$\begin{align} \frac{n}{2^n}\int_{2-\delta}^2 \frac{t^n-1}{t-1}\,dt&amp;=\frac{n}{2^n(\xi_n-1)}\int_{2-\delta}^2(t^n-1)\,dt\\\\ &amp;=\frac{n}{2^n(\xi_n-1)}\left(\frac{2^{n+1}-(2-\delta)^{n+1}}{n+1}-\delta\right) \tag 2 \end{align}$$</p> <p>Since $\xi_n\in[2-\delta,2]$ and $\left(\frac{2-\delta}{2}\right)^{n+1}\to 0$, $(2)$ yields</p> <p>$$\frac{n}{2^n}\int_{2-\delta}^2 \frac{t^n-1}{t-1}\,dt=\frac{2}{\xi_n-1}+o(1)$$</p> <p>For any given $\epsilon&gt;0$, we can choose $0&lt;\delta&lt;1$ so small that $\left|\frac{2}{\xi_n-1}-2\right|&lt;\epsilon$. Now, we proceed with that $0&lt;\delta&lt;1$ fixed.
</p> <p>Noting that $\frac{1-t^n}{1-t}$ is positive and monotonically increasing on $[0,2]$, the first term on the right-hand side of $(1)$ is bounded above by $\frac{n}{2^n}\frac{(2-\delta)^n-1}{1-\delta}\to 0$ as $n\to \infty$.</p> <hr> <p>Putting it all together, we obtain the coveted limit</p> <blockquote> <p>$$\lim_{n\to \infty}\frac{n}{2^n}\sum_{k=1}^n\frac{2^k}{k}=2$$</p> </blockquote> Answer 6: <p>I encountered this limit when computing $$ \sum_{k=0}^n\frac1{\binom{n}{k}}=\frac{n+1}{2^n}\sum_{k=0}^n\frac{2^k}{k+1} $$ in equation $(8)$ of <a href="https://mathematica.stackexchange.com/a/39470">this answer</a>, and for which an asymptotic expansion is given in equation $(6)$ of <a href="https://math.stackexchange.com/a/629637">this answer</a>: $$ \begin{align} \frac{n}{2^n}\sum_{k=1}^n\frac{2^k}k &amp;=\sum_{k=1}^n\frac1{\binom{n-1}{k-1}}\\ &amp;\sim2+\frac2{n-1}+\frac4{(n-1)(n-2)}+\frac{12}{(n-1)(n-2)(n-3)}\\ &amp;+\frac{48}{(n-1)(n-2)(n-3)(n-4)}+O\left(\frac1{n^5}\right) \end{align} $$</p>
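Numerically, the convergence asserted in these answers is easy to sanity-check. The short Python sketch below (the helper name `S` is ours, not from any of the answers) evaluates $S_n=\frac{n}{2^n}\sum_{k=1}^n\frac{2^k}{k}$ as $n\sum_{k=1}^n 2^{k-n}/k$ so that every term stays a modest float:

```python
def S(n):
    # S_n = (n / 2^n) * sum_{k=1}^n 2^k / k, rewritten as n * sum 2^(k-n) / k
    # to avoid large intermediate powers of 2.
    return sum(2.0 ** (k - n) * n / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, S(n))
# The values decrease toward 2, consistent with the asymptotic
# expansion S_n ~ 2 + 2/(n-1) + ... quoted in the last answer.
```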
How to prove that this set is countable?
2,092,611
4
https://math.stackexchange.com/questions/2092611/how-to-prove-that-this-set-is-countable
<p>Suppose that $(S, \Sigma)$ is a measurable space and $\mu$ is a finite measure on $(S, \Sigma)$. Suppose that whenever $x \in S$, the singleton $\{x\}$ belongs to $\Sigma$. </p> <p>Prove that the set $\{x \in S : \mu(\{x\}) &gt; 0 \}$ must be countable. </p>
[ { "answer_id": 2092619, "score": 2, "text": "<p>suppose it is not countable, Define $B_n$ as the set $\\{s\\in S | \\mu\\{s\\}&gt; \\frac{1}{n}\\}$. Notice that at least one of the $B_n$ must be uncountable.</p>\n\n<p>Pick a numerable subset of $B_n: \\{x_1,x_2,\\dots\\}$ and notice that $\\mu (\\bigcup\\limits_{i=1}^\\infty \\{x_i\\})=\\sum\\limits_{i=1}^\\infty \\mu(\\{x_i\\})=\\infty$, contradicting the fact that $\\mu$ is finite.</p>\n" }, { "answer_id": 2092618, "score": 1, "text": "<p>Hint: the sum of an uncountable numbers of strictly positive numbers is infinite.</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1413874/can-we-add-an-uncountable-number-of-positive-elements-and-can-this-sum-be-finit\">Can we add an uncountable number of positive elements, and can this sum be finite?</a></p>\n" }, { "answer_id": 2092621, "score": 0, "text": "<p><strong>HINT</strong>: The set can be written $$ \\bigcup_{n=1}^\\infty \\{x\\in S\\mid \\mu(\\{x\\}&gt;1/n)\\} $$</p>\n\n<p>Claim each set in the union is finite. If you can prove that, it is a countable union of finite sets and thus countable.</p>\n" }, { "answer_id": 2092619, "score": 2, "text": "<p>suppose it is not countable, Define $B_n$ as the set $\\{s\\in S | \\mu\\{s\\}&gt; \\frac{1}{n}\\}$. 
Notice that at least one of the $B_n$ must be uncountable.</p>\n\n<p>Pick a numerable subset of $B_n: \\{x_1,x_2,\\dots\\}$ and notice that $\\mu (\\bigcup\\limits_{i=1}^\\infty \\{x_i\\})=\\sum\\limits_{i=1}^\\infty \\mu(\\{x_i\\})=\\infty$, contradicting the fact that $\\mu$ is finite.</p>\n" }, { "answer_id": 2092618, "score": 1, "text": "<p>Hint: the sum of an uncountable numbers of strictly positive numbers is infinite.</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1413874/can-we-add-an-uncountable-number-of-positive-elements-and-can-this-sum-be-finit\">Can we add an uncountable number of positive elements, and can this sum be finite?</a></p>\n" }, { "answer_id": 2092621, "score": 0, "text": "<p><strong>HINT</strong>: The set can be written $$ \\bigcup_{n=1}^\\infty \\{x\\in S\\mid \\mu(\\{x\\}&gt;1/n)\\} $$</p>\n\n<p>Claim each set in the union is finite. If you can prove that, it is a countable union of finite sets and thus countable.</p>\n" } ]
Question: How to prove that this set is countable? Answer 1: <p>Suppose it is not countable. Define $B_n$ as the set $\{s\in S \mid \mu(\{s\})&gt; \frac{1}{n}\}$. Notice that at least one of the $B_n$ must be uncountable.</p> <p>Pick a countable subset of $B_n$: $\{x_1,x_2,\dots\}$ and notice that $\mu (\bigcup\limits_{i=1}^\infty \{x_i\})=\sum\limits_{i=1}^\infty \mu(\{x_i\})=\infty$, contradicting the fact that $\mu$ is finite.</p> Answer 2: <p>Hint: the sum of an uncountable number of strictly positive numbers is infinite.</p> <p><a href="https://math.stackexchange.com/questions/1413874/can-we-add-an-uncountable-number-of-positive-elements-and-can-this-sum-be-finit">Can we add an uncountable number of positive elements, and can this sum be finite?</a></p> Answer 3: <p><strong>HINT</strong>: The set can be written $$ \bigcup_{n=1}^\infty \{x\in S\mid \mu(\{x\})&gt;1/n\} $$</p> <p>Claim: each set in the union is finite. If you can prove that, it is a countable union of finite sets and thus countable.</p>
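The hints above reduce to a simple counting bound: since $\mu$ is finite, $B_n=\{x:\mu(\{x\})>1/n\}$ can contain at most $n\,\mu(S)$ points, so each $B_n$ is finite and the union is countable. A toy numerical illustration (Python; the particular atomic measure is invented for the demo):

```python
# A made-up atomic part of a finite measure with mu(S) = 1:
atoms = {f"x{i}": 0.5 ** (i + 2) for i in range(50)}  # masses 1/4, 1/8, ...
total_mass = 1.0

for n in (2, 10, 100):
    B_n = {x for x, m in atoms.items() if m > 1 / n}
    # If B_n had more than n * mu(S) points, those atoms alone would carry
    # total mass > mu(S), which is impossible; hence each B_n is finite,
    # and the set of all atoms is a countable union of finite sets.
    assert len(B_n) <= n * total_mass
    print(n, len(B_n))
```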
Are there any rigid algebraically closed fields?
2,092,789
4
https://math.stackexchange.com/questions/2092789/are-there-any-rigid-algebraically-closed-fields
<p>A field $k$ is <strong>rigid</strong> if its group of automorphisms $\mathrm{Aut}(k)$ is trivial. For example, the reals $\mathbb R$ and the prime fields $\mathbb Q$ and $\mathbb F_p$.</p> <p>A field $k$ is <strong>algebraically closed</strong> if every nonconstant polynomial $p(x) \in k[x]$ has a root in $k$. For instance, the algebraic closures of the above: $\bar{\mathbb R} = \mathbb C$, $\bar{\mathbb Q}$, etc.</p> <blockquote> <p>Is any field both rigid and algebraically closed?</p> </blockquote> <p>I am inclined to say no, mostly because I couldn't think of any, but also because it feels like in an algebraically closed field, there should always be finite index subfields which can be fiddled around with. Regardless, I have no idea how to prove or disprove this.</p>
[ { "answer_id": 2092817, "score": 4, "text": "<p>Any algebraically closed field has tons of automorphisms. The key fact is that if $K$ and $L$ are algebraically closed and algebraic over subfields $k\\subseteq K$ and $\\ell\\subseteq L$, then any isomorphism $f:k\\to \\ell$ extends to an isomorphism $\\bar{f}:K\\to L$. The proof is to just extend $f$ to one element of $K$ at a time: given $\\alpha\\in K\\setminus k$, there exists $\\beta\\in L$ whose minimal polynomial over $\\ell$ is the polynomial obtained by applying $f$ to the coefficients of minimal polynomial of $\\alpha$. You can then extend $f$ to an isomorphism $k(\\alpha)\\to\\ell(\\beta)$ which sends $\\alpha$ to $\\beta$. Repeating this by transfinite induction, you get an extension of $f$ to an isomorphism $\\bar{f}$ defined on all of $K$. The image of $\\bar{f}$ must be all of $L$, since it is an algebraically closed field containing $\\ell$ and $L$ is algebraic over $\\ell$.</p>\n\n<p>So given an algebraically closed field $K$, let $k_0$ be the prime subfield and let $B$ be a transcendence basis for $K$ over $k_0(B)$, so $K$ is algebraic over $k_0(B)$. Now let $k\\subset K$ be some nontrivial finite Galois extension of $k_0(B)$ and let $f:k\\to k$ be a nontrivial automorphism. By the key fact above (with $\\ell=k$ and $L=K$), $f$ extends to a nontrivial automorphism of $K$.</p>\n\n<p>(Note that this argument uses the axiom of choice, both in the proof of the key fact (to choose $\\alpha$ and $\\beta$ at every step of the induction) and in getting a transcendence basis for $K$ over $k_0$. I don't know whether it is possible for a rigid algebraically closed field to exist if you don't assume the axiom of choice.)</p>\n" }, { "answer_id": 2092806, "score": 0, "text": "<p><a href=\"https://mathoverflow.net/questions/22897/fields-with-trivial-automorphism-group\">This question</a> deals with a similar topic. 
Specifically, one answer cites a paper that constructs (at least one) real closed field with no non-trivial automorphisms. Unfortunately, often fields of this sort are very difficult to describe, and I'm unable to find a .pdf online.</p>\n\n<p>I encourage you to check out the question, but the paper (which I was able to find at sci hub, but won't link) is:</p>\n\n<blockquote>\n <p>S. Shelah, Models with second order properties. IV. A general method and eliminating diamonds -- Annals Pure and Applied Logic 25 (1983) 183-212</p>\n</blockquote>\n\n<p>DOI is <a href=\"http://dx.doi.org/10.1016/0168-0072(83)90013-1\" rel=\"nofollow noreferrer\">http://dx.doi.org/10.1016/0168-0072(83)90013-1</a></p>\n" }, { "answer_id": 2092817, "score": 4, "text": "<p>Any algebraically closed field has tons of automorphisms. The key fact is that if $K$ and $L$ are algebraically closed and algebraic over subfields $k\\subseteq K$ and $\\ell\\subseteq L$, then any isomorphism $f:k\\to \\ell$ extends to an isomorphism $\\bar{f}:K\\to L$. The proof is to just extend $f$ to one element of $K$ at a time: given $\\alpha\\in K\\setminus k$, there exists $\\beta\\in L$ whose minimal polynomial over $\\ell$ is the polynomial obtained by applying $f$ to the coefficients of minimal polynomial of $\\alpha$. You can then extend $f$ to an isomorphism $k(\\alpha)\\to\\ell(\\beta)$ which sends $\\alpha$ to $\\beta$. Repeating this by transfinite induction, you get an extension of $f$ to an isomorphism $\\bar{f}$ defined on all of $K$. The image of $\\bar{f}$ must be all of $L$, since it is an algebraically closed field containing $\\ell$ and $L$ is algebraic over $\\ell$.</p>\n\n<p>So given an algebraically closed field $K$, let $k_0$ be the prime subfield and let $B$ be a transcendence basis for $K$ over $k_0(B)$, so $K$ is algebraic over $k_0(B)$. Now let $k\\subset K$ be some nontrivial finite Galois extension of $k_0(B)$ and let $f:k\\to k$ be a nontrivial automorphism. 
By the key fact above (with $\\ell=k$ and $L=K$), $f$ extends to a nontrivial automorphism of $K$.</p>\n\n<p>(Note that this argument uses the axiom of choice, both in the proof of the key fact (to choose $\\alpha$ and $\\beta$ at every step of the induction) and in getting a transcendence basis for $K$ over $k_0$. I don't know whether it is possible for a rigid algebraically closed field to exist if you don't assume the axiom of choice.)</p>\n" }, { "answer_id": 2092806, "score": 0, "text": "<p><a href=\"https://mathoverflow.net/questions/22897/fields-with-trivial-automorphism-group\">This question</a> deals with a similar topic. Specifically, one answer cites a paper that constructs (at least one) real closed field with no non-trivial automorphisms. Unfortunately, often fields of this sort are very difficult to describe, and I'm unable to find a .pdf online.</p>\n\n<p>I encourage you to check out the question, but the paper (which I was able to find at sci hub, but won't link) is:</p>\n\n<blockquote>\n <p>S. Shelah, Models with second order properties. IV. A general method and eliminating diamonds -- Annals Pure and Applied Logic 25 (1983) 183-212</p>\n</blockquote>\n\n<p>DOI is <a href=\"http://dx.doi.org/10.1016/0168-0072(83)90013-1\" rel=\"nofollow noreferrer\">http://dx.doi.org/10.1016/0168-0072(83)90013-1</a></p>\n" } ]
Question: Are there any rigid algebraically closed fields? Answer 1: <p>Any algebraically closed field has tons of automorphisms. The key fact is that if $K$ and $L$ are algebraically closed and algebraic over subfields $k\subseteq K$ and $\ell\subseteq L$, then any isomorphism $f:k\to \ell$ extends to an isomorphism $\bar{f}:K\to L$. The proof is to just extend $f$ to one element of $K$ at a time: given $\alpha\in K\setminus k$, there exists $\beta\in L$ whose minimal polynomial over $\ell$ is the polynomial obtained by applying $f$ to the coefficients of the minimal polynomial of $\alpha$. You can then extend $f$ to an isomorphism $k(\alpha)\to\ell(\beta)$ which sends $\alpha$ to $\beta$. Repeating this by transfinite induction, you get an extension of $f$ to an isomorphism $\bar{f}$ defined on all of $K$. The image of $\bar{f}$ must be all of $L$, since it is an algebraically closed field containing $\ell$ and $L$ is algebraic over $\ell$.</p> <p>So given an algebraically closed field $K$, let $k_0$ be the prime subfield and let $B$ be a transcendence basis for $K$ over $k_0$, so $K$ is algebraic over $k_0(B)$. Now let $k\subset K$ be some nontrivial finite Galois extension of $k_0(B)$ and let $f:k\to k$ be a nontrivial automorphism. By the key fact above (with $\ell=k$ and $L=K$), $f$ extends to a nontrivial automorphism of $K$.</p> <p>(Note that this argument uses the axiom of choice, both in the proof of the key fact (to choose $\alpha$ and $\beta$ at every step of the induction) and in getting a transcendence basis for $K$ over $k_0$. I don't know whether it is possible for a rigid algebraically closed field to exist if you don't assume the axiom of choice.)</p> Answer 2: <p><a href="https://mathoverflow.net/questions/22897/fields-with-trivial-automorphism-group">This question</a> deals with a similar topic. Specifically, one answer cites a paper that constructs (at least one) real closed field with no non-trivial automorphisms. Unfortunately, fields of this sort are often very difficult to describe, and I'm unable to find a .pdf online.</p> <p>I encourage you to check out the question, but the paper (which I was able to find at sci hub, but won't link) is:</p> <blockquote> <p>S. Shelah, Models with second order properties. IV. A general method and eliminating diamonds -- Annals of Pure and Applied Logic 25 (1983) 183-212</p> </blockquote> <p>DOI is <a href="http://dx.doi.org/10.1016/0168-0072(83)90013-1" rel="nofollow noreferrer">http://dx.doi.org/10.1016/0168-0072(83)90013-1</a></p>
Find probability that the sum of two dice is not 6 and not 5?
2,092,879
4
https://math.stackexchange.com/questions/2092879/find-probability-that-the-sum-of-two-dice-is-not-6-and-not-5
<p>When rolling two fair dice, what is P(sum of two dice is not 6 and not 5)? </p> <p>Calculation: First, I found the probability of two numbers that would roll a sum of 6: </p> <p>(1,5) (2,4) (3,3) (4,2) (5,1) = 5/36 (each probability is 1/36)</p> <p>Then, I found the probability of two numbers that would roll a sum of 5: </p> <p>(1,4) (2,3) (3,2) (4,1) = 4/36 </p> <p>My first method was to add the probabilities and then subtract from one to give the probability of NOT rolling a sum of 6 or 5: </p> <p>(5/36) + (4/36) = 9/36</p> <p>1 - 9/36 = 27/36 = 0.75 </p> <p>This turned out to be the accepted answer of my online homework. But then, I realized that the original question asked for P(not sum of 6 AND not sum of 5). </p> <p>So, I recalculated:</p> <p>1 - 5/36 = 31/36</p> <p>1 - 4/36 = 32/36 </p> <p>(31/36)(32/36) = 992/1296 ~ 0.7654 (using multiplication rule)</p> <p>When I entered this fraction in, it was incorrect. But since the questions asks for "AND" and not "or", wouldn't the second probability be the actual correct answer? </p>
[ { "answer_id": 2092883, "score": 9, "text": "<p>By <a href=\"https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws\" rel=\"nofollow noreferrer\">De Morgan's Laws</a>:</p>\n\n<p>not sum of 6 and not sum of 5 is the same to not (sum of 6 or sum of 5)</p>\n\n<p>The second answer is not correct because \"not sum of 6\" and \"not sum of 5\" are not independent. So you cannot write the probability of their intersection as product of the individual probabilities (or as you called it \"multiplication rule\")</p>\n\n<p>The first answer is correct because \"sum of 6\" and \"sum of 5\" are disjoint, and the probability of their union is the sum of the probabilities.</p>\n" }, { "answer_id": 2092888, "score": 4, "text": "<p>$P(A~\\text{and}~B)$ i.e. $P(A\\cap B)$ is equal to $P(A)\\cdot P(B)$ <strong>ONLY WHEN THEY ARE INDEPENDENT EVENTS</strong>.</p>\n\n<p>In general $P(A\\cap B)\\neq P(A)\\cdot P(B)$</p>\n\n<p>Instead, $P(A\\cap B)=P(A)\\cdot P(B\\mid A)$</p>\n\n<p>If you have no reason to expect the events to be independent, you must approach differently and your initial approach was indeed the correct one.</p>\n" }, { "answer_id": 2092883, "score": 9, "text": "<p>By <a href=\"https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws\" rel=\"nofollow noreferrer\">De Morgan's Laws</a>:</p>\n\n<p>not sum of 6 and not sum of 5 is the same to not (sum of 6 or sum of 5)</p>\n\n<p>The second answer is not correct because \"not sum of 6\" and \"not sum of 5\" are not independent. So you cannot write the probability of their intersection as product of the individual probabilities (or as you called it \"multiplication rule\")</p>\n\n<p>The first answer is correct because \"sum of 6\" and \"sum of 5\" are disjoint, and the probability of their union is the sum of the probabilities.</p>\n" }, { "answer_id": 2092888, "score": 4, "text": "<p>$P(A~\\text{and}~B)$ i.e. 
$P(A\\cap B)$ is equal to $P(A)\\cdot P(B)$ <strong>ONLY WHEN THEY ARE INDEPENDENT EVENTS</strong>.</p>\n\n<p>In general $P(A\\cap B)\\neq P(A)\\cdot P(B)$</p>\n\n<p>Instead, $P(A\\cap B)=P(A)\\cdot P(B\\mid A)$</p>\n\n<p>If you have no reason to expect the events to be independent, you must approach differently and your initial approach was indeed the correct one.</p>\n" } ]
Question: Find probability that the sum of two dice is not 6 and not 5? Answer 1: <p>By <a href="https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws" rel="nofollow noreferrer">De Morgan's Laws</a>:</p> <p>not sum of 6 and not sum of 5 is the same as not (sum of 6 or sum of 5)</p> <p>The second answer is not correct because "not sum of 6" and "not sum of 5" are not independent. So you cannot write the probability of their intersection as the product of the individual probabilities (or, as you called it, the "multiplication rule").</p> <p>The first answer is correct because "sum of 6" and "sum of 5" are disjoint, so the probability of their union is the sum of the probabilities.</p> Answer 2: <p>$P(A~\text{and}~B)$, i.e. $P(A\cap B)$, is equal to $P(A)\cdot P(B)$ <strong>ONLY WHEN THEY ARE INDEPENDENT EVENTS</strong>.</p> <p>In general $P(A\cap B)\neq P(A)\cdot P(B)$</p> <p>Instead, $P(A\cap B)=P(A)\cdot P(B\mid A)$</p> <p>If you have no reason to expect the events to be independent, you must approach the problem differently, and your initial approach was indeed the correct one.</p>
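Both points in these answers can be checked by brute force over the $36$ equally likely outcomes. This short Python sketch (the helper names are ours) confirms the accepted $27/36 = 0.75$ and shows why the multiplication rule fails here:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls

def prob(event):
    # Probability of an event, given as a predicate on one roll of two dice.
    return Fraction(sum(event(a, b) for a, b in outcomes), len(outcomes))

p_not6_and_not5 = prob(lambda a, b: a + b != 6 and a + b != 5)
p_not6 = prob(lambda a, b: a + b != 6)
p_not5 = prob(lambda a, b: a + b != 5)

print(p_not6_and_not5)  # 3/4, matching 1 - 9/36
print(p_not6 * p_not5)  # 62/81 (= 992/1296), not 3/4: the events are dependent
```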
Where can I get a list of important theorems for Math Olympiads?
2,092,973
4
https://math.stackexchange.com/questions/2092973/where-can-i-get-a-list-of-important-theorems-for-math-olympiads
<p>I am a high school student from India. I participated in RMO (the first stage) and qualified for INMO. The questions asked in RMO were not very tough. Still, I could not solve most of the problems because I didn't know the theorems and axioms required to solve them. Although I have full command over my school syllabus, most of the topics for math olympiads are not even taught at school. For example, number theory: some important theorems like Wilson's theorem, Ptolemy's theorem, etc. are beyond my school's syllabus. </p> <p>Where can I get a comprehensive list of theorems (with proofs)? Can someone give me links to websites or names of books? </p>
[ { "answer_id": 2093057, "score": 5, "text": "<p>Here is a pdf which include many tool and problems.\n<a href=\"http://www.cs.elte.hu/~nagyzoli/compendium.pdf\" rel=\"noreferrer\">http://www.cs.elte.hu/~nagyzoli/compendium.pdf</a></p>\n" }, { "answer_id": 2092981, "score": 2, "text": "<p><a href=\"https://www.artofproblemsolving.com/wiki/index.php/Category:Theorems\" rel=\"nofollow noreferrer\">this is</a> an incomplete list, and some of the theorems are overly comlplex but maybe it helps.</p>\n\n<p>The website mathlinks.ro is used by a lot of matheletes, eveyone I know that has gone to the IMO has used it intensely for some period of time.</p>\n" }, { "answer_id": 2093057, "score": 5, "text": "<p>Here is a pdf which include many tool and problems.\n<a href=\"http://www.cs.elte.hu/~nagyzoli/compendium.pdf\" rel=\"noreferrer\">http://www.cs.elte.hu/~nagyzoli/compendium.pdf</a></p>\n" }, { "answer_id": 2092981, "score": 2, "text": "<p><a href=\"https://www.artofproblemsolving.com/wiki/index.php/Category:Theorems\" rel=\"nofollow noreferrer\">this is</a> an incomplete list, and some of the theorems are overly comlplex but maybe it helps.</p>\n\n<p>The website mathlinks.ro is used by a lot of matheletes, eveyone I know that has gone to the IMO has used it intensely for some period of time.</p>\n" } ]
Question: Where can I get a list of important theorems for Math Olympiads? Answer 1: <p>Here is a pdf which includes many tools and problems. <a href="http://www.cs.elte.hu/~nagyzoli/compendium.pdf" rel="noreferrer">http://www.cs.elte.hu/~nagyzoli/compendium.pdf</a></p> Answer 2: <p><a href="https://www.artofproblemsolving.com/wiki/index.php/Category:Theorems" rel="nofollow noreferrer">this is</a> an incomplete list, and some of the theorems are overly complex, but maybe it helps.</p> <p>The website mathlinks.ro is used by a lot of mathletes; everyone I know that has gone to the IMO has used it intensely for some period of time.</p> Answer 3: <p>Here is a pdf which includes many tools and problems. <a href="http://www.cs.elte.hu/~nagyzoli/compendium.pdf" rel="noreferrer">http://www.cs.elte.hu/~nagyzoli/compendium.pdf</a></p> Answer 4: <p><a href="https://www.artofproblemsolving.com/wiki/index.php/Category:Theorems" rel="nofollow noreferrer">this is</a> an incomplete list, and some of the theorems are overly complex, but maybe it helps.</p> <p>The website mathlinks.ro is used by a lot of mathletes; everyone I know that has gone to the IMO has used it intensely for some period of time.</p>
Is the sequence $\frac{1}{\arctan(-n)}\cdot\frac{3n-2}{n^2+n+10}$ increasing or decreasing?
2,093,006
4
https://math.stackexchange.com/questions/2093006/is-the-sequence-frac1-arctan-n-cdot-frac3n-2n2n10-increasing-or
<p>Is the following sequence increasing or decreasing $$\frac{1}{\arctan(-n)}\cdot\frac{3n-2}{n^2+n+10}$$?</p> <p>I managed to come to the conclusion that $a_n=\frac{1}{\arctan(-n)}$ is strictly increasing and that for $b_n=\frac{3n-2}{n^2+n+10}, b_1&lt;b_2&lt;b_3&gt;b_4&gt;b_5...$. So $b_n$ is increasing for the first $3$ terms and after that it's decreasing. So what can I conclude about the product of these $2$ sequences? I know that the product of $2$ increasing sequences doesn't have to be increasing.</p>
[ { "answer_id": 2093025, "score": 1, "text": "<p>Let $(a_n)$ be $\\ge 0$ and decreasing, and $(b_n)$ be $\\le 0$ and increasing. Then:</p>\n\n<p>$$a_{n+1}b_{n+1} - a_n b_n = a_{n+1}(b_{n+1} - b_n) + (a_{n+1} - a_n)b_n \\ge 0$$</p>\n\n<p>So your sequence is increasing.</p>\n" }, { "answer_id": 2093215, "score": 0, "text": "<p>Your approach is correct and almost solves the problem, you just have to pay attention to the details.</p>\n\n<p>Since you do not give a proof for the monotonicity of $(b_n)$, let us do it here. We suspect that from a certain $n$ onwards, $(b_n)$ will decrease. Indeed,</p>\n\n<p>$$b_n \\ge b_{n+1} \\iff \\frac {3n-2} {n^2+n+10} \\ge \\frac {3n+1} {n^2+3n+12} \\iff 7n^2 + 30n -24 \\ge 4n^2 + 31n + 10 \\iff \\\\\n3n^2 -n -34 \\ge 0 .$$</p>\n\n<p>Given that $n \\in \\Bbb N$, the above is equivalent to $n \\ge \\dfrac {1 + \\sqrt{409}} 6 \\in (3,4)$ it follows that, indeed, $(b_n)_{n \\ge 4}$ decreases. Clearly, it is also positive.</p>\n\n<p>On the other hand, since $\\arctan$ is an increasing function, it follows that $n \\mapsto \\dfrac 1 {\\arctan n}$ is decreasing. It is also positive.</p>\n\n<p>Now, while it's true that the product of two decreasing functions is not necessarily decreasing (think of $(-n) \\cdot (-n) = n^2)$), it is nevertheless true that the product of two <em>positive</em> decreasing functions is decreasing: if $x &lt; y$ then $f(x) \\ge f(y)$ and $g(x) \\ge g(y)$, so $f(x) g(x) \\ge f(y) g(x) \\ge f(y) g(y)$ (a similar argument can be given for increasing functions).</p>\n\n<p>Remember that $\\arctan(-n) = -\\arctan n$. Since $n \\mapsto \\dfrac 1 {\\arctan n}$ and $n \\mapsto \\dfrac {3n-2} {n^2+n+10}$ are both decreasing and positive, it follows that $n \\mapsto \\dfrac 1 {\\arctan n} \\dfrac {3n-2} {n^2+n+10}$ will also be so. 
Finally, putting the missing \"$-$\" in front of this product makes it negative and increasing.</p>\n" }, { "answer_id": 2093025, "score": 1, "text": "<p>Let $(a_n)$ be $\\ge 0$ and decreasing, and $(b_n)$ be $\\le 0$ and increasing. Then:</p>\n\n<p>$$a_{n+1}b_{n+1} - a_n b_n = a_{n+1}(b_{n+1} - b_n) + (a_{n+1} - a_n)b_n \\ge 0$$</p>\n\n<p>So your sequence is increasing.</p>\n" }, { "answer_id": 2093215, "score": 0, "text": "<p>Your approach is correct and almost solves the problem, you just have to pay attention to the details.</p>\n\n<p>Since you do not give a proof for the monotonicity of $(b_n)$, let us do it here. We suspect that from a certain $n$ onwards, $(b_n)$ will decrease. Indeed,</p>\n\n<p>$$b_n \\ge b_{n+1} \\iff \\frac {3n-2} {n^2+n+10} \\ge \\frac {3n+1} {n^2+3n+12} \\iff 7n^2 + 30n -24 \\ge 4n^2 + 31n + 10 \\iff \\\\\n3n^2 -n -34 \\ge 0 .$$</p>\n\n<p>Given that $n \\in \\Bbb N$, the above is equivalent to $n \\ge \\dfrac {1 + \\sqrt{409}} 6 \\in (3,4)$ it follows that, indeed, $(b_n)_{n \\ge 4}$ decreases. Clearly, it is also positive.</p>\n\n<p>On the other hand, since $\\arctan$ is an increasing function, it follows that $n \\mapsto \\dfrac 1 {\\arctan n}$ is decreasing. It is also positive.</p>\n\n<p>Now, while it's true that the product of two decreasing functions is not necessarily decreasing (think of $(-n) \\cdot (-n) = n^2)$), it is nevertheless true that the product of two <em>positive</em> decreasing functions is decreasing: if $x &lt; y$ then $f(x) \\ge f(y)$ and $g(x) \\ge g(y)$, so $f(x) g(x) \\ge f(y) g(x) \\ge f(y) g(y)$ (a similar argument can be given for increasing functions).</p>\n\n<p>Remember that $\\arctan(-n) = -\\arctan n$. Since $n \\mapsto \\dfrac 1 {\\arctan n}$ and $n \\mapsto \\dfrac {3n-2} {n^2+n+10}$ are both decreasing and positive, it follows that $n \\mapsto \\dfrac 1 {\\arctan n} \\dfrac {3n-2} {n^2+n+10}$ will also be so. 
Finally, putting the missing \"$-$\" in front of this product makes it negative and increasing.</p>\n" } ]
Question: Is the sequence $\frac{1}{\arctan(-n)}\cdot\frac{3n-2}{n^2+n+10}$ increasing or decreasing? Answer 1: <p>Let $(a_n)$ be $\ge 0$ and decreasing, and $(b_n)$ be $\le 0$ and increasing. Then:</p> <p>$$a_{n+1}b_{n+1} - a_n b_n = a_{n+1}(b_{n+1} - b_n) + (a_{n+1} - a_n)b_n \ge 0$$</p> <p>So your sequence is increasing.</p> Answer 2: <p>Your approach is correct and almost solves the problem, you just have to pay attention to the details.</p> <p>Since you do not give a proof for the monotonicity of $(b_n)$, let us do it here. We suspect that from a certain $n$ onwards, $(b_n)$ will decrease. Indeed,</p> <p>$$b_n \ge b_{n+1} \iff \frac {3n-2} {n^2+n+10} \ge \frac {3n+1} {n^2+3n+12} \iff 7n^2 + 30n -24 \ge 4n^2 + 31n + 10 \iff \\ 3n^2 -n -34 \ge 0 .$$</p> <p>Given that $n \in \Bbb N$, the above is equivalent to $n \ge \dfrac {1 + \sqrt{409}} 6 \in (3,4)$ it follows that, indeed, $(b_n)_{n \ge 4}$ decreases. Clearly, it is also positive.</p> <p>On the other hand, since $\arctan$ is an increasing function, it follows that $n \mapsto \dfrac 1 {\arctan n}$ is decreasing. It is also positive.</p> <p>Now, while it's true that the product of two decreasing functions is not necessarily decreasing (think of $(-n) \cdot (-n) = n^2)$), it is nevertheless true that the product of two <em>positive</em> decreasing functions is decreasing: if $x &lt; y$ then $f(x) \ge f(y)$ and $g(x) \ge g(y)$, so $f(x) g(x) \ge f(y) g(x) \ge f(y) g(y)$ (a similar argument can be given for increasing functions).</p> <p>Remember that $\arctan(-n) = -\arctan n$. Since $n \mapsto \dfrac 1 {\arctan n}$ and $n \mapsto \dfrac {3n-2} {n^2+n+10}$ are both decreasing and positive, it follows that $n \mapsto \dfrac 1 {\arctan n} \dfrac {3n-2} {n^2+n+10}$ will also be so. Finally, putting the missing "$-$" in front of this product makes it negative and increasing.</p> Answer 3: <p>Let $(a_n)$ be $\ge 0$ and decreasing, and $(b_n)$ be $\le 0$ and increasing. 
Then:</p> <p>$$a_{n+1}b_{n+1} - a_n b_n = a_{n+1}(b_{n+1} - b_n) + (a_{n+1} - a_n)b_n \ge 0$$</p> <p>So your sequence is increasing.</p> Answer 4: <p>Your approach is correct and almost solves the problem, you just have to pay attention to the details.</p> <p>Since you do not give a proof for the monotonicity of $(b_n)$, let us do it here. We suspect that from a certain $n$ onwards, $(b_n)$ will decrease. Indeed,</p> <p>$$b_n \ge b_{n+1} \iff \frac {3n-2} {n^2+n+10} \ge \frac {3n+1} {n^2+3n+12} \iff 7n^2 + 30n -24 \ge 4n^2 + 31n + 10 \iff \\ 3n^2 -n -34 \ge 0 .$$</p> <p>Given that $n \in \Bbb N$, the above is equivalent to $n \ge \dfrac {1 + \sqrt{409}} 6 \in (3,4)$ it follows that, indeed, $(b_n)_{n \ge 4}$ decreases. Clearly, it is also positive.</p> <p>On the other hand, since $\arctan$ is an increasing function, it follows that $n \mapsto \dfrac 1 {\arctan n}$ is decreasing. It is also positive.</p> <p>Now, while it's true that the product of two decreasing functions is not necessarily decreasing (think of $(-n) \cdot (-n) = n^2)$), it is nevertheless true that the product of two <em>positive</em> decreasing functions is decreasing: if $x &lt; y$ then $f(x) \ge f(y)$ and $g(x) \ge g(y)$, so $f(x) g(x) \ge f(y) g(x) \ge f(y) g(y)$ (a similar argument can be given for increasing functions).</p> <p>Remember that $\arctan(-n) = -\arctan n$. Since $n \mapsto \dfrac 1 {\arctan n}$ and $n \mapsto \dfrac {3n-2} {n^2+n+10}$ are both decreasing and positive, it follows that $n \mapsto \dfrac 1 {\arctan n} \dfrac {3n-2} {n^2+n+10}$ will also be so. Finally, putting the missing "$-$" in front of this product makes it negative and increasing.</p>
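A quick numerical spot-check of the record above (an illustrative sketch, not part of the original thread; the helper name `c` is our own): the terms are all negative, and the sequence increases once $(3n-2)/(n^2+n+10)$ becomes decreasing, exactly as the second answer's computation ($n \ge 4$) predicts.

```python
import math

def c(n):
    # c_n = (1 / arctan(-n)) * (3n - 2) / (n^2 + n + 10)
    return (1.0 / math.atan(-n)) * (3 * n - 2) / (n * n + n + 10)

print(all(c(n) < 0 for n in range(1, 200)))         # every term is negative
print(all(c(n) < c(n + 1) for n in range(3, 200)))  # increasing from n = 3 on
print(c(1) > c(2) > c(3))                           # but the first terms decrease
```

Note that the early terms decrease, so "the sequence is increasing" only holds from the index where both factors of the positive product are decreasing, not from $n=1$.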
Find the set of values of $\lambda$ for which the equation $|x^2-4|x|-12|=\lambda$ has 6 distinct real roots
2,093,190
4
https://math.stackexchange.com/questions/2093190/find-the-set-of-values-of-lambda-for-which-the-equation-x2-4x-12-lambd
<blockquote> <p>Find the set of values of $\lambda$ for which the equation $|x^2-4|x|-12|=\lambda$ has 6 distinct real roots </p> </blockquote> <p><strong>My Approach:</strong> </p> <p>$|x^2-4|x|-12|=\lambda$ </p> <p><em>Case 1:</em></p> <p>$x^2-4|x|-12=\lambda$ </p> <ul> <li><p>If $x\geq 0$<br> $x^2-4x-12=\lambda\cdots(i)$ </p></li> <li><p>If $x&lt;0$ $x^2+4x-12=\lambda\cdots(ii)$ </p></li> </ul> <p><em>Case 2:</em></p> <p>$x^2-4|x|-12=-\lambda$ </p> <ul> <li><p>If $x\geq 0$<br> $x^2-4x-12=-\lambda\cdots(iii)$ </p></li> <li><p>If $x&lt;0$ $x^2+4x-12=-\lambda\cdots(iv)$ </p></li> </ul> <p>Now, we know the equation has 6 distinct real roots. So either only 3 equations have real roots which are all distinct too, or, some roots are common. I don't now how to solve further and I need a hint to proceed.</p>
[ { "answer_id": 2093199, "score": 3, "text": "<p><strong>Hint:</strong>\n<a href=\"https://i.stack.imgur.com/GyyB1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/GyyB1.png\" alt=\"enter image description here\"></a></p>\n\n<p>$$|x^2-4|x|-12|=\\lambda$$</p>\n\n<p>$$y=|x^2-4|x|-12|$$\n$$y=\\lambda$$\nAnswer: $$12&lt;\\lambda&lt;16$$</p>\n" }, { "answer_id": 2093200, "score": 3, "text": "<p>You can plot $|x^2-4|x|-12|$</p>\n\n<p><a href=\"https://i.stack.imgur.com/31wTI.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/31wTI.png\" alt=\"enter image description here\"></a></p>\n\n<p>then it is easy to find\n$$12&lt;\\lambda&lt;16$$</p>\n" }, { "answer_id": 2093221, "score": 3, "text": "<p>$\\lambda$ must obviously be non-negative. Then assume</p>\n\n<ul>\n<li><p>$x\\ge0$ and </p>\n\n<ol>\n<li><p>$x^2-4x-12-\\lambda=0$. The roots are $x=2\\pm{\\sqrt{16+\\lambda}}$, but only the $+$ sign is valid. For all $\\lambda$, one root.</p></li>\n<li><p>$x^2-4x-12+\\lambda=0$. The roots are $x=2\\pm{\\sqrt{16-\\lambda}}$. They are both positive and distinct for $12&lt;\\lambda&lt;16$.</p></li>\n</ol></li>\n<li><p>$x\\le0$ and </p>\n\n<ol>\n<li><p>$x^2+4x-12-\\lambda=0$. The roots are $x=-2\\pm{\\sqrt{16+\\lambda}}$, but only the $-$ sign is valid. For all $\\lambda$, one root.</p></li>\n<li><p>$x^2+4x-12+\\lambda=0$. The roots are $x=-2\\pm{\\sqrt{16-\\lambda}}$. 
They are both negative and distinct for $12&lt;\\lambda&lt;16$.</p></li>\n</ol></li>\n</ul>\n\n<p>In conclusion, for $12&lt;\\lambda&lt;16$, there are three positive and three negative roots.</p>\n" }, { "answer_id": 2093199, "score": 3, "text": "<p><strong>Hint:</strong>\n<a href=\"https://i.stack.imgur.com/GyyB1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/GyyB1.png\" alt=\"enter image description here\"></a></p>\n\n<p>$$|x^2-4|x|-12|=\\lambda$$</p>\n\n<p>$$y=|x^2-4|x|-12|$$\n$$y=\\lambda$$\nAnswer: $$12&lt;\\lambda&lt;16$$</p>\n" }, { "answer_id": 2093200, "score": 3, "text": "<p>You can plot $|x^2-4|x|-12|$</p>\n\n<p><a href=\"https://i.stack.imgur.com/31wTI.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/31wTI.png\" alt=\"enter image description here\"></a></p>\n\n<p>then it is easy to find\n$$12&lt;\\lambda&lt;16$$</p>\n" }, { "answer_id": 2093221, "score": 3, "text": "<p>$\\lambda$ must obviously be non-negative. Then assume</p>\n\n<ul>\n<li><p>$x\\ge0$ and </p>\n\n<ol>\n<li><p>$x^2-4x-12-\\lambda=0$. The roots are $x=2\\pm{\\sqrt{16+\\lambda}}$, but only the $+$ sign is valid. For all $\\lambda$, one root.</p></li>\n<li><p>$x^2-4x-12+\\lambda=0$. The roots are $x=2\\pm{\\sqrt{16-\\lambda}}$. They are both positive and distinct for $12&lt;\\lambda&lt;16$.</p></li>\n</ol></li>\n<li><p>$x\\le0$ and </p>\n\n<ol>\n<li><p>$x^2+4x-12-\\lambda=0$. The roots are $x=-2\\pm{\\sqrt{16+\\lambda}}$, but only the $-$ sign is valid. For all $\\lambda$, one root.</p></li>\n<li><p>$x^2+4x-12+\\lambda=0$. The roots are $x=-2\\pm{\\sqrt{16-\\lambda}}$. They are both negative and distinct for $12&lt;\\lambda&lt;16$.</p></li>\n</ol></li>\n</ul>\n\n<p>In conclusion, for $12&lt;\\lambda&lt;16$, there are three positive and three negative roots.</p>\n" } ]
Question: Find the set of values of $\lambda$ for which the equation $|x^2-4|x|-12|=\lambda$ has 6 distinct real roots Answer 1: <p><strong>Hint:</strong> <a href="https://i.stack.imgur.com/GyyB1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GyyB1.png" alt="enter image description here"></a></p> <p>$$|x^2-4|x|-12|=\lambda$$</p> <p>$$y=|x^2-4|x|-12|$$ $$y=\lambda$$ Answer: $$12&lt;\lambda&lt;16$$</p> Answer 2: <p>You can plot $|x^2-4|x|-12|$</p> <p><a href="https://i.stack.imgur.com/31wTI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/31wTI.png" alt="enter image description here"></a></p> <p>then it is easy to find $$12&lt;\lambda&lt;16$$</p> Answer 3: <p>$\lambda$ must obviously be non-negative. Then assume</p> <ul> <li><p>$x\ge0$ and </p> <ol> <li><p>$x^2-4x-12-\lambda=0$. The roots are $x=2\pm{\sqrt{16+\lambda}}$, but only the $+$ sign is valid. For all $\lambda$, one root.</p></li> <li><p>$x^2-4x-12+\lambda=0$. The roots are $x=2\pm{\sqrt{16-\lambda}}$. They are both positive and distinct for $12&lt;\lambda&lt;16$.</p></li> </ol></li> <li><p>$x\le0$ and </p> <ol> <li><p>$x^2+4x-12-\lambda=0$. The roots are $x=-2\pm{\sqrt{16+\lambda}}$, but only the $-$ sign is valid. For all $\lambda$, one root.</p></li> <li><p>$x^2+4x-12+\lambda=0$. The roots are $x=-2\pm{\sqrt{16-\lambda}}$. 
They are both negative and distinct for $12&lt;\lambda&lt;16$.</p></li> </ol></li> </ul> <p>In conclusion, for $12&lt;\lambda&lt;16$, there are three positive and three negative roots.</p> Answer 4: <p><strong>Hint:</strong> <a href="https://i.stack.imgur.com/GyyB1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GyyB1.png" alt="enter image description here"></a></p> <p>$$|x^2-4|x|-12|=\lambda$$</p> <p>$$y=|x^2-4|x|-12|$$ $$y=\lambda$$ Answer: $$12&lt;\lambda&lt;16$$</p> Answer 5: <p>You can plot $|x^2-4|x|-12|$</p> <p><a href="https://i.stack.imgur.com/31wTI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/31wTI.png" alt="enter image description here"></a></p> <p>then it is easy to find $$12&lt;\lambda&lt;16$$</p> Answer 6: <p>$\lambda$ must obviously be non-negative. Then assume</p> <ul> <li><p>$x\ge0$ and </p> <ol> <li><p>$x^2-4x-12-\lambda=0$. The roots are $x=2\pm{\sqrt{16+\lambda}}$, but only the $+$ sign is valid. For all $\lambda$, one root.</p></li> <li><p>$x^2-4x-12+\lambda=0$. The roots are $x=2\pm{\sqrt{16-\lambda}}$. They are both positive and distinct for $12&lt;\lambda&lt;16$.</p></li> </ol></li> <li><p>$x\le0$ and </p> <ol> <li><p>$x^2+4x-12-\lambda=0$. The roots are $x=-2\pm{\sqrt{16+\lambda}}$, but only the $-$ sign is valid. For all $\lambda$, one root.</p></li> <li><p>$x^2+4x-12+\lambda=0$. The roots are $x=-2\pm{\sqrt{16-\lambda}}$. They are both negative and distinct for $12&lt;\lambda&lt;16$.</p></li> </ol></li> </ul> <p>In conclusion, for $12&lt;\lambda&lt;16$, there are three positive and three negative roots.</p>
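The answer $12<\lambda<16$ in the record above can be sanity-checked numerically; the sketch below (our own helper, not from the thread) counts sign changes of $|x^2-4|x|-12|-\lambda$ on a fine grid, each of which brackets one simple root.

```python
def count_roots(lam, lo=-10.0, hi=10.0, steps=100000):
    f = lambda x: abs(x * x - 4 * abs(x) - 12) - lam
    # each sign change on the grid brackets exactly one simple root
    count, prev = 0, f(lo)
    for i in range(1, steps + 1):
        cur = f(lo + i * (hi - lo) / steps)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

print(count_roots(14))  # 6: inside (12, 16)
print(count_roots(10))  # 4: below the interval
print(count_roots(17))  # 2: above the interval
```

All roots lie well inside $[-10,10]$ (the largest is $2+\sqrt{16+\lambda}$), so the grid window suffices; the endpoints $\lambda=12$ and $\lambda=16$ are excluded because they produce roots at $x=0$ and double roots at $x=\pm2$, respectively.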
What&#39;s a closed form for $\sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~odd}^k(-1)^r{k\choose r}r^n$
2,093,487
4
https://math.stackexchange.com/questions/2093487/whats-a-closed-form-for-sum-k-0n-frac1k1-sum-r-0-risoddk-1r
<p>I want to use a closed form of $$\sum_{k=0}^n\frac{1}{k+1}\sum_{\ \ \ r=0\\r\text{ is odd}}^k(-1)^r{k\choose r}r^n$$ and $$\sum_{k=0}^n\frac{1}{k+1}\sum_{\ \ \ r=0\\r\text{ is even}}^k(-1)^r{k\choose r}r^n$$ Thanks.</p>
[ { "answer_id": 2093509, "score": 1, "text": "<p>Hint:\n$$S_1 = \\sum_{k=0}^n\\frac{1}{k+1}\\sum_{r=0\\\\r~is~odd}^k(-1)^r{k\\choose r}r^n$$</p>\n\n<p>$$S_2 = \\sum_{k=0}^n\\frac{1}{k+1}\\sum_{r=0\\\\r~is~even}^k(-1)^r{k\\choose r}r^n$$</p>\n\n<p>What is $$S_1 + i S_2$$?</p>\n" }, { "answer_id": 2115485, "score": 0, "text": "<p>By the binomial formula, we have:\n$$ \\sum_{k=0}^{n} \\binom{n}{k} = \\sum_{k=0}^{n} \\binom{n}{k}1^{n-k}1^{k} = (1+1)^n=2^n$$\nIn a similar way:\n$$ \\sum_{k=0}^{n} \\binom{n}{k}(-1)^k = (1-1)^n = 0$$\nAdding and subtracting these formulas, we deduce \n$$ \\sum_{k=0\\\\ k\\ odd}^{n} \\binom{n}{k} = \\sum_{k=0\\\\ k\\ even}^{n} \\binom{n}{k} = 2^{n-1}$$\nSince $(-1)^r=1$ when $r$ is even, it follows:\n$$\\sum_{k=0}^{n} \\frac{1}{k+1}\\sum_{r=0\\\\ r\\ even}^{k} \\binom{k}{r}(-1)^r = \\sum_{k=0}^{n} \\frac{2^{k-1}}{k+1}=\\frac{1}{4}\\sum_{k=0}^{n} \\frac{2^{k+1}}{k+1}=\\frac{1}{4}\\int_{0}^{2}\\left ( \\sum_{k=0}^{n} x^{k} \\right )\\,dx=\\\\ \\frac{1}{4}\\int_{0}^{2} \\frac{x^{n+1}-1}{x-1}\\,dx$$</p>\n\n<p>P.S. I think you were asking for \n$$ \\sum_{k=0}^{n}\\frac{1}{k+1}\\left (\\binom{k}{0}-\\binom{k}{2}+\\binom{k}{4}-...\\right )$$\nIn that case, thinking of $S_{1}+iS_{2}$ works. 
One obtains something like\n$$ S_{1}+iS_{2} = \\sum_{k=0}^{n}\\sum_{r=0}^{k} \\frac{e^{\\frac{i\\pi r}{2}}}{k+1}$$\nIt can be manipulated by the same techniques shown here.</p>\n" }, { "answer_id": 2093509, "score": 1, "text": "<p>Hint:\n$$S_1 = \\sum_{k=0}^n\\frac{1}{k+1}\\sum_{r=0\\\\r~is~odd}^k(-1)^r{k\\choose r}r^n$$</p>\n\n<p>$$S_2 = \\sum_{k=0}^n\\frac{1}{k+1}\\sum_{r=0\\\\r~is~even}^k(-1)^r{k\\choose r}r^n$$</p>\n\n<p>What is $$S_1 + i S_2$$?</p>\n" }, { "answer_id": 2115485, "score": 0, "text": "<p>By the binomial formula, we have:\n$$ \\sum_{k=0}^{n} \\binom{n}{k} = \\sum_{k=0}^{n} \\binom{n}{k}1^{n-k}1^{k} = (1+1)^n=2^n$$\nIn a similar way:\n$$ \\sum_{k=0}^{n} \\binom{n}{k}(-1)^k = (1-1)^n = 0$$\nAdding and subtracting these formulas, we deduce \n$$ \\sum_{k=0\\\\ k\\ odd}^{n} \\binom{n}{k} = \\sum_{k=0\\\\ k\\ even}^{n} \\binom{n}{k} = 2^{n-1}$$\nSince $(-1)^r=1$ when $r$ is even, it follows:\n$$\\sum_{k=0}^{n} \\frac{1}{k+1}\\sum_{r=0\\\\ r\\ even}^{k} \\binom{k}{r}(-1)^r = \\sum_{k=0}^{n} \\frac{2^{k-1}}{k+1}=\\frac{1}{4}\\sum_{k=0}^{n} \\frac{2^{k+1}}{k+1}=\\frac{1}{4}\\int_{0}^{2}\\left ( \\sum_{k=0}^{n} x^{k} \\right )\\,dx=\\\\ \\frac{1}{4}\\int_{0}^{2} \\frac{x^{n+1}-1}{x-1}\\,dx$$</p>\n\n<p>P.S. I think you were asking for \n$$ \\sum_{k=0}^{n}\\frac{1}{k+1}\\left (\\binom{k}{0}-\\binom{k}{2}+\\binom{k}{4}-...\\right )$$\nIn that case, thinking of $S_{1}+iS_{2}$ works. One obtains something like\n$$ S_{1}+iS_{2} = \\sum_{k=0}^{n}\\sum_{r=0}^{k} \\frac{e^{\\frac{i\\pi r}{2}}}{k+1}$$\nIt can be manipulated by the same techniques shown here.</p>\n" } ]
Question: What&#39;s a closed form for $\sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~odd}^k(-1)^r{k\choose r}r^n$ Answer 1: <p>Hint: $$S_1 = \sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~odd}^k(-1)^r{k\choose r}r^n$$</p> <p>$$S_2 = \sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~even}^k(-1)^r{k\choose r}r^n$$</p> <p>What is $$S_1 + i S_2$$?</p> Answer 2: <p>By the binomial formula, we have: $$ \sum_{k=0}^{n} \binom{n}{k} = \sum_{k=0}^{n} \binom{n}{k}1^{n-k}1^{k} = (1+1)^n=2^n$$ In a similar way: $$ \sum_{k=0}^{n} \binom{n}{k}(-1)^k = (1-1)^n = 0$$ Adding and subtracting these formulas, we deduce $$ \sum_{k=0\\ k\ odd}^{n} \binom{n}{k} = \sum_{k=0\\ k\ even}^{n} \binom{n}{k} = 2^{n-1}$$ Since $(-1)^r=1$ when $r$ is even, it follows: $$\sum_{k=0}^{n} \frac{1}{k+1}\sum_{r=0\\ r\ even}^{k} \binom{k}{r}(-1)^r = \sum_{k=0}^{n} \frac{2^{k-1}}{k+1}=\frac{1}{4}\sum_{k=0}^{n} \frac{2^{k+1}}{k+1}=\frac{1}{4}\int_{0}^{2}\left ( \sum_{k=0}^{n} x^{k} \right )\,dx=\\ \frac{1}{4}\int_{0}^{2} \frac{x^{n+1}-1}{x-1}\,dx$$</p> <p>P.S. I think you were asking for $$ \sum_{k=0}^{n}\frac{1}{k+1}\left (\binom{k}{0}-\binom{k}{2}+\binom{k}{4}-...\right )$$ In that case, thinking of $S_{1}+iS_{2}$ works. 
One obtains something like $$ S_{1}+iS_{2} = \sum_{k=0}^{n}\sum_{r=0}^{k} \frac{e^{\frac{i\pi r}{2}}}{k+1}$$ It can be manipulated by the same techniques shown here.</p> Answer 3: <p>Hint: $$S_1 = \sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~odd}^k(-1)^r{k\choose r}r^n$$</p> <p>$$S_2 = \sum_{k=0}^n\frac{1}{k+1}\sum_{r=0\\r~is~even}^k(-1)^r{k\choose r}r^n$$</p> <p>What is $$S_1 + i S_2$$?</p> Answer 4: <p>By the binomial formula, we have: $$ \sum_{k=0}^{n} \binom{n}{k} = \sum_{k=0}^{n} \binom{n}{k}1^{n-k}1^{k} = (1+1)^n=2^n$$ In a similar way: $$ \sum_{k=0}^{n} \binom{n}{k}(-1)^k = (1-1)^n = 0$$ Adding and subtracting these formulas, we deduce $$ \sum_{k=0\\ k\ odd}^{n} \binom{n}{k} = \sum_{k=0\\ k\ even}^{n} \binom{n}{k} = 2^{n-1}$$ Since $(-1)^r=1$ when $r$ is even, it follows: $$\sum_{k=0}^{n} \frac{1}{k+1}\sum_{r=0\\ r\ even}^{k} \binom{k}{r}(-1)^r = \sum_{k=0}^{n} \frac{2^{k-1}}{k+1}=\frac{1}{4}\sum_{k=0}^{n} \frac{2^{k+1}}{k+1}=\frac{1}{4}\int_{0}^{2}\left ( \sum_{k=0}^{n} x^{k} \right )\,dx=\\ \frac{1}{4}\int_{0}^{2} \frac{x^{n+1}-1}{x-1}\,dx$$</p> <p>P.S. I think you were asking for $$ \sum_{k=0}^{n}\frac{1}{k+1}\left (\binom{k}{0}-\binom{k}{2}+\binom{k}{4}-...\right )$$ In that case, thinking of $S_{1}+iS_{2}$ works. One obtains something like $$ S_{1}+iS_{2} = \sum_{k=0}^{n}\sum_{r=0}^{k} \frac{e^{\frac{i\pi r}{2}}}{k+1}$$ It can be manipulated by the same techniques shown here.</p>
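Before hunting for a closed form, the two sums in the record above can be tabulated exactly with rational arithmetic; a small sketch (the function `S` is our own, using the convention $0^0=1$ for the $r=0$, $n=0$ term):

```python
from fractions import Fraction
from math import comb

def S(n, parity):
    # sum_{k=0}^{n} 1/(k+1) * sum over r in 0..k with r % 2 == parity
    # of (-1)^r * C(k, r) * r^n
    total = Fraction(0)
    for k in range(n + 1):
        inner = sum((-1) ** r * comb(k, r) * r ** n
                    for r in range(k + 1) if r % 2 == parity)
        total += Fraction(inner, k + 1)
    return total

print([S(n, 1) for n in range(1, 5)])  # odd-r sums:  -1/2, -7/6, ...
print([S(n, 0) for n in range(1, 5)])  # even-r sums:  0, 4/3, ...
```

Having the first few values as exact fractions makes it easy to test any conjectured closed form against the data.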
Ramanujan&#39;s continued fraction for ratio of gamma values
2,093,499
4
https://math.stackexchange.com/questions/2093499/ramanujans-continued-fraction-for-ratio-of-gamma-values
<p>Ramanujan mentioned the following continued fraction formula in his famous letter to G. H. Hardy in 1913:</p> <p>$$\cfrac{4}{x +}\cfrac{1^{2}}{2x +}\cfrac{3^{2}}{2x +}\cdots = \left(\dfrac{\Gamma\left(\dfrac{x + 1}{4}\right)}{\Gamma\left(\dfrac{x + 3}{4}\right)}\right)^{2}\tag{1}$$ The form of the continued fraction is very familiar and I am aware of the standard result corresponding to $x = 1$ namely $$\pi = \cfrac{4}{1 +}\cfrac{1^{2}}{2 +}\cfrac{3^{2}}{2 +}\cdots\tag{2}$$ How do we generalize $(2)$ to get $(1)$?</p> <hr> <p>Just for clarity the notation $$\cfrac{a_{1}}{b_{1}+}\cfrac{a_{2}}{b_{2}+}\cfrac{a_{3}}{b_{3}+}\cdots$$ consumes less space in typing compared to the more cumbersome but easier to understand continued fraction $$\cfrac{a_{1}}{b_{1} + \cfrac{a_{2}}{b_{2} + \cfrac{a_{3}}{b_{3} + \cdots}}}$$</p>
[ { "answer_id": 3757207, "score": 3, "text": "<p>A fairly straightforward approach, that generalizes to <span class=\"math-container\">$${\\raise{-1ex}\\mathop{\\huge\\mathrm{K}}_{n=0}^\\infty}\\frac{a_2 n^2+a_1 n+a_0}{b_1 n+b_0}$$</span> (using <a href=\"https://en.wikipedia.org/wiki/Generalized_continued_fraction#Notation\" rel=\"nofollow noreferrer\">Gauss' notation</a>), is to consider exponential generating functions of the convergents.</p>\n<p>The value of the above CF (if it exists) is <span class=\"math-container\">$\\lim\\limits_{n\\to\\infty}(P_n/Q_n)$</span>, where the vectors <span class=\"math-container\">$R_n=(P_n,Q_n)$</span> satisfy <span class=\"math-container\">$$R_0=(1,0),\\quad R_1=(0,1),\\quad R_{n+2}=(b_1 n+b_0)R_{n+1}+(a_2 n^2+a_1 n+a_0)R_n,\\quad(n\\geqslant 0)$$</span> and we find that the EGF <span class=\"math-container\">$R(z)=\\sum_{n=0}^\\infty R_n z^n/n!$</span> satisfies <span class=\"math-container\">$$(1-b_1 z-a_2 z^2)R''(z)-\\big(b_0+(a_1+a_2)z\\big)R'(z)-a_0 R(z)=0,$$</span> an ODE of hypergeometric <span class=\"math-container\">$_2F_1$</span> type (after a linear change of variable).</p>\n<p>In our case, <span class=\"math-container\">$b_0=2x$</span>, <span class=\"math-container\">$b_1=0$</span>, <span class=\"math-container\">$a_0=1$</span>, and <span class=\"math-container\">$a_1=a_2=4$</span>; we suppose <span class=\"math-container\">$x&gt;0$</span> is real (for simplicity). 
<a href=\"https://en.wikipedia.org/wiki/Hypergeometric_function#The_hypergeometric_differential_equation\" rel=\"nofollow noreferrer\">Then</a> <span class=\"math-container\">$$R(z)=C_1\\times{}_2F_1\\left(\\frac12,\\frac12;1+\\frac{x}{2};\\frac12-z\\right)+C_2\\times\\ldots,$$</span> where &quot;<span class=\"math-container\">$\\ldots$</span>&quot; stands for the second solution of the ODE, which is <em>unbounded</em> at <span class=\"math-container\">$z\\to 1/2$</span>, and <span class=\"math-container\">$C_1,C_2$</span> are some constants (which of course depend on <span class=\"math-container\">$x$</span>, and determined by <span class=\"math-container\">$R(0)=R_0$</span> and <span class=\"math-container\">$R'(0)=R_1$</span>).</p>\n<p>Writing <span class=\"math-container\">$R(z)=\\big(P(z),Q(z)\\big)$</span> [recall that <span class=\"math-container\">$R_n$</span> are vectors] and assuming that our CF converges (in fact it's not hard to prove) to <span class=\"math-container\">$f(x)$</span>, we <a href=\"https://math.stackexchange.com/a/3260827\">know</a> that <span class=\"math-container\">$$f(x)=\\lim_{z\\to 1/2^-}\\frac{P(z)}{Q(z)}.$$</span></p>\n<p>This is equal to <span class=\"math-container\">$A/B$</span>, where <span class=\"math-container\">$(A,B)=C_2$</span> is our second &quot;constant&quot;. 
Computing it, we find <span class=\"math-container\">$$f(x)=\\frac{F'(1/2)}{F(1/2)},\\qquad F(z)={}_2F_1\\left(\\frac12,\\frac12;1+\\frac{x}{2};z\\right).$$</span></p>\n<p>The value of <span class=\"math-container\">$F(1/2)=\\Gamma\\big((x+1)/4\\big)/\\Gamma\\big((x+3)/4\\big)$</span> can be computed using the <a href=\"https://en.wikipedia.org/wiki/Hypergeometric_function#Euler_type\" rel=\"nofollow noreferrer\">integral representation</a> of <span class=\"math-container\">$_2F_1$</span> (the substitution <span class=\"math-container\">$t=x(2-x)$</span> helps; there is also a <a href=\"https://dlmf.nist.gov/15.4#E30\" rel=\"nofollow noreferrer\">known formula</a>). The value of <span class=\"math-container\">$F'(1/2)$</span> is obtained similarly, after integration by parts.</p>\n<p>Finally, we obtain the CF <span class=\"math-container\">$(1)$</span> in the OP as <span class=\"math-container\">$4/\\big(x+f(x)\\big)$</span>. <em>This gives the expected result.</em></p>\n<p>An equivalent (and a bit easier) approach is to use the corresponding <a href=\"https://dlmf.nist.gov/15.5#E19\" rel=\"nofollow noreferrer\">recurrence</a> for <span class=\"math-container\">$_2F_1$</span> instead of the ODE.</p>\n" }, { "answer_id": 4793719, "score": 2, "text": "<p>We'll go the <em>other</em> direction and generalize <span class=\"math-container\">$(1)$</span> to get more pi formulas <span class=\"math-container\">$(2)$</span>. 
Ramanujan gave the more general,</p>\n<p><span class=\"math-container\">$$F(a,b) = \\frac{\\Gamma\\left(\\frac{a+b+1}4\\right)\\Gamma\\left(\\frac{a-b+1}4\\right)}{\\Gamma\\left(\\frac{a+b+3}4\\right)\\Gamma\\left(\\frac{a-b+3}4\\right)} = \\cfrac{4}{a + \\cfrac{1^2-b^2}{2a + \\cfrac{3^2-b^2}{2a + \\cfrac{5^2-b^2}{2a + \\ddots}}}}$$</span></p>\n<p>For <span class=\"math-container\">$b=0$</span>, this reduces to the one in Ramanujan's letter,</p>\n<p><span class=\"math-container\">$$F(a,0) = \\frac{\\Gamma^2\\left(\\frac{a+1}4\\right) }{\\Gamma^2\\left(\\frac{a+3}4\\right) } = \\cfrac{4}{a + \\cfrac{1^2}{2a + \\cfrac{3^2}{2a + \\cfrac{5^2}{2a + \\ddots}}}}$$</span></p>\n<p>Let <span class=\"math-container\">$a$</span> be an <em>odd</em> integer, then <span class=\"math-container\">$F(a,0)=F(a)$</span> is a rational multiple of <span class=\"math-container\">$\\pi$</span> or <span class=\"math-container\">$1/\\pi$</span>. In closed-form,</p>\n<p><span class=\"math-container\">\\begin{align}\nF(4m+1) &amp;= \\pi\\,\\left(\\frac{(2m)!}{2^{2m} m!^2}\\right)^2\\\\\nF(4m+3) &amp;= \\frac{4}{\\pi}\\left(\\frac{2^{2m} m!^2}{(2m+1)!}\\right)^2 \n\\end{align}</span></p>\n<p>The simplest is <span class=\"math-container\">$m=0,$</span> yielding the familiar cfracs,</p>\n<p><span class=\"math-container\">$$F(1) = \\pi = \\cfrac{4}{1 + \\cfrac{1^2}{2 + \\cfrac{3^2}{2 + \\cfrac{5^2}{2 + \\ddots}}}}, \\qquad F(3) = \\frac4{\\pi} = \\cfrac{4}{3 + \\cfrac{1^2}{6 + \\cfrac{3^2}{6 + \\cfrac{5^2}{6 + \\ddots}}}}$$</span></p>\n<p>and so on.</p>\n" }, { "answer_id": 3757207, "score": 3, "text": "<p>A fairly straightforward approach, that generalizes to <span class=\"math-container\">$${\\raise{-1ex}\\mathop{\\huge\\mathrm{K}}_{n=0}^\\infty}\\frac{a_2 n^2+a_1 n+a_0}{b_1 n+b_0}$$</span> (using <a href=\"https://en.wikipedia.org/wiki/Generalized_continued_fraction#Notation\" rel=\"nofollow noreferrer\">Gauss' notation</a>), is to consider exponential generating functions of the 
convergents.</p>\n<p>The value of the above CF (if it exists) is <span class=\"math-container\">$\\lim\\limits_{n\\to\\infty}(P_n/Q_n)$</span>, where the vectors <span class=\"math-container\">$R_n=(P_n,Q_n)$</span> satisfy <span class=\"math-container\">$$R_0=(1,0),\\quad R_1=(0,1),\\quad R_{n+2}=(b_1 n+b_0)R_{n+1}+(a_2 n^2+a_1 n+a_0)R_n,\\quad(n\\geqslant 0)$$</span> and we find that the EGF <span class=\"math-container\">$R(z)=\\sum_{n=0}^\\infty R_n z^n/n!$</span> satisfies <span class=\"math-container\">$$(1-b_1 z-a_2 z^2)R''(z)-\\big(b_0+(a_1+a_2)z\\big)R'(z)-a_0 R(z)=0,$$</span> an ODE of hypergeometric <span class=\"math-container\">$_2F_1$</span> type (after a linear change of variable).</p>\n<p>In our case, <span class=\"math-container\">$b_0=2x$</span>, <span class=\"math-container\">$b_1=0$</span>, <span class=\"math-container\">$a_0=1$</span>, and <span class=\"math-container\">$a_1=a_2=4$</span>; we suppose <span class=\"math-container\">$x&gt;0$</span> is real (for simplicity). 
<a href=\"https://en.wikipedia.org/wiki/Hypergeometric_function#The_hypergeometric_differential_equation\" rel=\"nofollow noreferrer\">Then</a> <span class=\"math-container\">$$R(z)=C_1\\times{}_2F_1\\left(\\frac12,\\frac12;1+\\frac{x}{2};\\frac12-z\\right)+C_2\\times\\ldots,$$</span> where &quot;<span class=\"math-container\">$\\ldots$</span>&quot; stands for the second solution of the ODE, which is <em>unbounded</em> at <span class=\"math-container\">$z\\to 1/2$</span>, and <span class=\"math-container\">$C_1,C_2$</span> are some constants (which of course depend on <span class=\"math-container\">$x$</span>, and determined by <span class=\"math-container\">$R(0)=R_0$</span> and <span class=\"math-container\">$R'(0)=R_1$</span>).</p>\n<p>Writing <span class=\"math-container\">$R(z)=\\big(P(z),Q(z)\\big)$</span> [recall that <span class=\"math-container\">$R_n$</span> are vectors] and assuming that our CF converges (in fact it's not hard to prove) to <span class=\"math-container\">$f(x)$</span>, we <a href=\"https://math.stackexchange.com/a/3260827\">know</a> that <span class=\"math-container\">$$f(x)=\\lim_{z\\to 1/2^-}\\frac{P(z)}{Q(z)}.$$</span></p>\n<p>This is equal to <span class=\"math-container\">$A/B$</span>, where <span class=\"math-container\">$(A,B)=C_2$</span> is our second &quot;constant&quot;. 
Computing it, we find <span class=\"math-container\">$$f(x)=\\frac{F'(1/2)}{F(1/2)},\\qquad F(z)={}_2F_1\\left(\\frac12,\\frac12;1+\\frac{x}{2};z\\right).$$</span></p>\n<p>The value of <span class=\"math-container\">$F(1/2)=\\Gamma\\big((x+1)/4\\big)/\\Gamma\\big((x+3)/4\\big)$</span> can be computed using the <a href=\"https://en.wikipedia.org/wiki/Hypergeometric_function#Euler_type\" rel=\"nofollow noreferrer\">integral representation</a> of <span class=\"math-container\">$_2F_1$</span> (the substitution <span class=\"math-container\">$t=x(2-x)$</span> helps; there is also a <a href=\"https://dlmf.nist.gov/15.4#E30\" rel=\"nofollow noreferrer\">known formula</a>). The value of <span class=\"math-container\">$F'(1/2)$</span> is obtained similarly, after integration by parts.</p>\n<p>Finally, we obtain the CF <span class=\"math-container\">$(1)$</span> in the OP as <span class=\"math-container\">$4/\\big(x+f(x)\\big)$</span>. <em>This gives the expected result.</em></p>\n<p>An equivalent (and a bit easier) approach is to use the corresponding <a href=\"https://dlmf.nist.gov/15.5#E19\" rel=\"nofollow noreferrer\">recurrence</a> for <span class=\"math-container\">$_2F_1$</span> instead of the ODE.</p>\n" }, { "answer_id": 4793719, "score": 2, "text": "<p>We'll go the <em>other</em> direction and generalize <span class=\"math-container\">$(1)$</span> to get more pi formulas <span class=\"math-container\">$(2)$</span>. 
Ramanujan gave the more general,</p>\n<p><span class=\"math-container\">$$F(a,b) = \\frac{\\Gamma\\left(\\frac{a+b+1}4\\right)\\Gamma\\left(\\frac{a-b+1}4\\right)}{\\Gamma\\left(\\frac{a+b+3}4\\right)\\Gamma\\left(\\frac{a-b+3}4\\right)} = \\cfrac{4}{a + \\cfrac{1^2-b^2}{2a + \\cfrac{3^2-b^2}{2a + \\cfrac{5^2-b^2}{2a + \\ddots}}}}$$</span></p>\n<p>For <span class=\"math-container\">$b=0$</span>, this reduces to the one in Ramanujan's letter,</p>\n<p><span class=\"math-container\">$$F(a,0) = \\frac{\\Gamma^2\\left(\\frac{a+1}4\\right) }{\\Gamma^2\\left(\\frac{a+3}4\\right) } = \\cfrac{4}{a + \\cfrac{1^2}{2a + \\cfrac{3^2}{2a + \\cfrac{5^2}{2a + \\ddots}}}}$$</span></p>\n<p>Let <span class=\"math-container\">$a$</span> be an <em>odd</em> integer, then <span class=\"math-container\">$F(a,0)=F(a)$</span> is a rational multiple of <span class=\"math-container\">$\\pi$</span> or <span class=\"math-container\">$1/\\pi$</span>. In closed-form,</p>\n<p><span class=\"math-container\">\\begin{align}\nF(4m+1) &amp;= \\pi\\,\\left(\\frac{(2m)!}{2^{2m} m!^2}\\right)^2\\\\\nF(4m+3) &amp;= \\frac{4}{\\pi}\\left(\\frac{2^{2m} m!^2}{(2m+1)!}\\right)^2 \n\\end{align}</span></p>\n<p>The simplest is <span class=\"math-container\">$m=0,$</span> yielding the familiar cfracs,</p>\n<p><span class=\"math-container\">$$F(1) = \\pi = \\cfrac{4}{1 + \\cfrac{1^2}{2 + \\cfrac{3^2}{2 + \\cfrac{5^2}{2 + \\ddots}}}}, \\qquad F(3) = \\frac4{\\pi} = \\cfrac{4}{3 + \\cfrac{1^2}{6 + \\cfrac{3^2}{6 + \\cfrac{5^2}{6 + \\ddots}}}}$$</span></p>\n<p>and so on.</p>\n" } ]
Question: Ramanujan&#39;s continued fraction for ratio of gamma values Answer 1: <p>A fairly straightforward approach, that generalizes to <span class="math-container">$${\raise{-1ex}\mathop{\huge\mathrm{K}}_{n=0}^\infty}\frac{a_2 n^2+a_1 n+a_0}{b_1 n+b_0}$$</span> (using <a href="https://en.wikipedia.org/wiki/Generalized_continued_fraction#Notation" rel="nofollow noreferrer">Gauss' notation</a>), is to consider exponential generating functions of the convergents.</p> <p>The value of the above CF (if it exists) is <span class="math-container">$\lim\limits_{n\to\infty}(P_n/Q_n)$</span>, where the vectors <span class="math-container">$R_n=(P_n,Q_n)$</span> satisfy <span class="math-container">$$R_0=(1,0),\quad R_1=(0,1),\quad R_{n+2}=(b_1 n+b_0)R_{n+1}+(a_2 n^2+a_1 n+a_0)R_n,\quad(n\geqslant 0)$$</span> and we find that the EGF <span class="math-container">$R(z)=\sum_{n=0}^\infty R_n z^n/n!$</span> satisfies <span class="math-container">$$(1-b_1 z-a_2 z^2)R''(z)-\big(b_0+(a_1+a_2)z\big)R'(z)-a_0 R(z)=0,$$</span> an ODE of hypergeometric <span class="math-container">$_2F_1$</span> type (after a linear change of variable).</p> <p>In our case, <span class="math-container">$b_0=2x$</span>, <span class="math-container">$b_1=0$</span>, <span class="math-container">$a_0=1$</span>, and <span class="math-container">$a_1=a_2=4$</span>; we suppose <span class="math-container">$x&gt;0$</span> is real (for simplicity). 
<a href="https://en.wikipedia.org/wiki/Hypergeometric_function#The_hypergeometric_differential_equation" rel="nofollow noreferrer">Then</a> <span class="math-container">$$R(z)=C_1\times{}_2F_1\left(\frac12,\frac12;1+\frac{x}{2};\frac12-z\right)+C_2\times\ldots,$$</span> where &quot;<span class="math-container">$\ldots$</span>&quot; stands for the second solution of the ODE, which is <em>unbounded</em> at <span class="math-container">$z\to 1/2$</span>, and <span class="math-container">$C_1,C_2$</span> are some constants (which of course depend on <span class="math-container">$x$</span>, and determined by <span class="math-container">$R(0)=R_0$</span> and <span class="math-container">$R'(0)=R_1$</span>).</p> <p>Writing <span class="math-container">$R(z)=\big(P(z),Q(z)\big)$</span> [recall that <span class="math-container">$R_n$</span> are vectors] and assuming that our CF converges (in fact it's not hard to prove) to <span class="math-container">$f(x)$</span>, we <a href="https://math.stackexchange.com/a/3260827">know</a> that <span class="math-container">$$f(x)=\lim_{z\to 1/2^-}\frac{P(z)}{Q(z)}.$$</span></p> <p>This is equal to <span class="math-container">$A/B$</span>, where <span class="math-container">$(A,B)=C_2$</span> is our second &quot;constant&quot;. Computing it, we find <span class="math-container">$$f(x)=\frac{F'(1/2)}{F(1/2)},\qquad F(z)={}_2F_1\left(\frac12,\frac12;1+\frac{x}{2};z\right).$$</span></p> <p>The value of <span class="math-container">$F(1/2)=\Gamma\big((x+1)/4\big)/\Gamma\big((x+3)/4\big)$</span> can be computed using the <a href="https://en.wikipedia.org/wiki/Hypergeometric_function#Euler_type" rel="nofollow noreferrer">integral representation</a> of <span class="math-container">$_2F_1$</span> (the substitution <span class="math-container">$t=x(2-x)$</span> helps; there is also a <a href="https://dlmf.nist.gov/15.4#E30" rel="nofollow noreferrer">known formula</a>). 
The value of <span class="math-container">$F'(1/2)$</span> is obtained similarly, after integration by parts.</p> <p>Finally, we obtain the CF <span class="math-container">$(1)$</span> in the OP as <span class="math-container">$4/\big(x+f(x)\big)$</span>. <em>This gives the expected result.</em></p> <p>An equivalent (and a bit easier) approach is to use the corresponding <a href="https://dlmf.nist.gov/15.5#E19" rel="nofollow noreferrer">recurrence</a> for <span class="math-container">$_2F_1$</span> instead of the ODE.</p> Answer 2: <p>We'll go the <em>other</em> direction and generalize <span class="math-container">$(1)$</span> to get more pi formulas <span class="math-container">$(2)$</span>. Ramanujan gave the more general,</p> <p><span class="math-container">$$F(a,b) = \frac{\Gamma\left(\frac{a+b+1}4\right)\Gamma\left(\frac{a-b+1}4\right)}{\Gamma\left(\frac{a+b+3}4\right)\Gamma\left(\frac{a-b+3}4\right)} = \cfrac{4}{a + \cfrac{1^2-b^2}{2a + \cfrac{3^2-b^2}{2a + \cfrac{5^2-b^2}{2a + \ddots}}}}$$</span></p> <p>For <span class="math-container">$b=0$</span>, this reduces to the one in Ramanujan's letter,</p> <p><span class="math-container">$$F(a,0) = \frac{\Gamma^2\left(\frac{a+1}4\right) }{\Gamma^2\left(\frac{a+3}4\right) } = \cfrac{4}{a + \cfrac{1^2}{2a + \cfrac{3^2}{2a + \cfrac{5^2}{2a + \ddots}}}}$$</span></p> <p>Let <span class="math-container">$a$</span> be an <em>odd</em> integer, then <span class="math-container">$F(a,0)=F(a)$</span> is a rational multiple of <span class="math-container">$\pi$</span> or <span class="math-container">$1/\pi$</span>. 
In closed-form,</p> <p><span class="math-container">\begin{align} F(4m+1) &amp;= \pi\,\left(\frac{(2m)!}{2^{2m} m!^2}\right)^2\\ F(4m+3) &amp;= \frac{4}{\pi}\left(\frac{2^{2m} m!^2}{(2m+1)!}\right)^2 \end{align}</span></p> <p>The simplest is <span class="math-container">$m=0,$</span> yielding the familiar cfracs,</p> <p><span class="math-container">$$F(1) = \pi = \cfrac{4}{1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \ddots}}}}, \qquad F(3) = \frac4{\pi} = \cfrac{4}{3 + \cfrac{1^2}{6 + \cfrac{3^2}{6 + \cfrac{5^2}{6 + \ddots}}}}$$</span></p> <p>and so on.</p> Answer 3: <p>A fairly straightforward approach, that generalizes to <span class="math-container">$${\raise{-1ex}\mathop{\huge\mathrm{K}}_{n=0}^\infty}\frac{a_2 n^2+a_1 n+a_0}{b_1 n+b_0}$$</span> (using <a href="https://en.wikipedia.org/wiki/Generalized_continued_fraction#Notation" rel="nofollow noreferrer">Gauss' notation</a>), is to consider exponential generating functions of the convergents.</p> <p>The value of the above CF (if it exists) is <span class="math-container">$\lim\limits_{n\to\infty}(P_n/Q_n)$</span>, where the vectors <span class="math-container">$R_n=(P_n,Q_n)$</span> satisfy <span class="math-container">$$R_0=(1,0),\quad R_1=(0,1),\quad R_{n+2}=(b_1 n+b_0)R_{n+1}+(a_2 n^2+a_1 n+a_0)R_n,\quad(n\geqslant 0)$$</span> and we find that the EGF <span class="math-container">$R(z)=\sum_{n=0}^\infty R_n z^n/n!$</span> satisfies <span class="math-container">$$(1-b_1 z-a_2 z^2)R''(z)-\big(b_0+(a_1+a_2)z\big)R'(z)-a_0 R(z)=0,$$</span> an ODE of hypergeometric <span class="math-container">$_2F_1$</span> type (after a linear change of variable).</p> <p>In our case, <span class="math-container">$b_0=2x$</span>, <span class="math-container">$b_1=0$</span>, <span class="math-container">$a_0=1$</span>, and <span class="math-container">$a_1=a_2=4$</span>; we suppose <span class="math-container">$x&gt;0$</span> is real (for simplicity). 
<a href="https://en.wikipedia.org/wiki/Hypergeometric_function#The_hypergeometric_differential_equation" rel="nofollow noreferrer">Then</a> <span class="math-container">$$R(z)=C_1\times{}_2F_1\left(\frac12,\frac12;1+\frac{x}{2};\frac12-z\right)+C_2\times\ldots,$$</span> where &quot;<span class="math-container">$\ldots$</span>&quot; stands for the second solution of the ODE, which is <em>unbounded</em> at <span class="math-container">$z\to 1/2$</span>, and <span class="math-container">$C_1,C_2$</span> are some constants (which of course depend on <span class="math-container">$x$</span>, and determined by <span class="math-container">$R(0)=R_0$</span> and <span class="math-container">$R'(0)=R_1$</span>).</p> <p>Writing <span class="math-container">$R(z)=\big(P(z),Q(z)\big)$</span> [recall that <span class="math-container">$R_n$</span> are vectors] and assuming that our CF converges (in fact it's not hard to prove) to <span class="math-container">$f(x)$</span>, we <a href="https://math.stackexchange.com/a/3260827">know</a> that <span class="math-container">$$f(x)=\lim_{z\to 1/2^-}\frac{P(z)}{Q(z)}.$$</span></p> <p>This is equal to <span class="math-container">$A/B$</span>, where <span class="math-container">$(A,B)=C_2$</span> is our second &quot;constant&quot;. Computing it, we find <span class="math-container">$$f(x)=\frac{F'(1/2)}{F(1/2)},\qquad F(z)={}_2F_1\left(\frac12,\frac12;1+\frac{x}{2};z\right).$$</span></p> <p>The value of <span class="math-container">$F(1/2)=\Gamma\big((x+1)/4\big)/\Gamma\big((x+3)/4\big)$</span> can be computed using the <a href="https://en.wikipedia.org/wiki/Hypergeometric_function#Euler_type" rel="nofollow noreferrer">integral representation</a> of <span class="math-container">$_2F_1$</span> (the substitution <span class="math-container">$t=x(2-x)$</span> helps; there is also a <a href="https://dlmf.nist.gov/15.4#E30" rel="nofollow noreferrer">known formula</a>). 
The value of <span class="math-container">$F'(1/2)$</span> is obtained similarly, after integration by parts.</p> <p>Finally, we obtain the CF <span class="math-container">$(1)$</span> in the OP as <span class="math-container">$4/\big(x+f(x)\big)$</span>. <em>This gives the expected result.</em></p> <p>An equivalent (and a bit easier) approach is to use the corresponding <a href="https://dlmf.nist.gov/15.5#E19" rel="nofollow noreferrer">recurrence</a> for <span class="math-container">$_2F_1$</span> instead of the ODE.</p> Answer 4: <p>We'll go the <em>other</em> direction and generalize <span class="math-container">$(1)$</span> to get more pi formulas <span class="math-container">$(2)$</span>. Ramanujan gave the more general,</p> <p><span class="math-container">$$F(a,b) = \frac{\Gamma\left(\frac{a+b+1}4\right)\Gamma\left(\frac{a-b+1}4\right)}{\Gamma\left(\frac{a+b+3}4\right)\Gamma\left(\frac{a-b+3}4\right)} = \cfrac{4}{a + \cfrac{1^2-b^2}{2a + \cfrac{3^2-b^2}{2a + \cfrac{5^2-b^2}{2a + \ddots}}}}$$</span></p> <p>For <span class="math-container">$b=0$</span>, this reduces to the one in Ramanujan's letter,</p> <p><span class="math-container">$$F(a,0) = \frac{\Gamma^2\left(\frac{a+1}4\right) }{\Gamma^2\left(\frac{a+3}4\right) } = \cfrac{4}{a + \cfrac{1^2}{2a + \cfrac{3^2}{2a + \cfrac{5^2}{2a + \ddots}}}}$$</span></p> <p>Let <span class="math-container">$a$</span> be an <em>odd</em> integer, then <span class="math-container">$F(a,0)=F(a)$</span> is a rational multiple of <span class="math-container">$\pi$</span> or <span class="math-container">$1/\pi$</span>. 
In closed-form,</p> <p><span class="math-container">\begin{align} F(4m+1) &amp;= \pi\,\left(\frac{(2m)!}{2^{2m} m!^2}\right)^2\\ F(4m+3) &amp;= \frac{4}{\pi}\left(\frac{2^{2m} m!^2}{(2m+1)!}\right)^2 \end{align}</span></p> <p>The simplest is <span class="math-container">$m=0,$</span> yielding the familiar cfracs,</p> <p><span class="math-container">$$F(1) = \pi = \cfrac{4}{1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \ddots}}}}, \qquad F(3) = \frac4{\pi} = \cfrac{4}{3 + \cfrac{1^2}{6 + \cfrac{3^2}{6 + \cfrac{5^2}{6 + \ddots}}}}$$</span></p> <p>and so on.</p>
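As a sanity check (not part of either answer above), Ramanujan's cfrac $F(a) = 4/(a + 1^2/(2a + 3^2/(2a + \ddots)))$ can be truncated and compared numerically with the gamma-quotient closed form $\Gamma^2\big((a+1)/4\big)/\Gamma^2\big((a+3)/4\big)$. A minimal Python sketch, evaluating the continued fraction bottom-up:

```python
import math

def ramanujan_cf(a, depth=500):
    """Truncation of Ramanujan's cfrac 4/(a + 1^2/(2a + 3^2/(2a + ...))).

    Evaluated bottom-up; the partial numerators are 1^2, 3^2, 5^2, ...
    and every partial denominator after the leading `a` is 2a.
    """
    tail = 2.0 * a  # innermost denominator of the truncation
    for k in range(depth, 0, -1):
        num = (2 * k - 1) ** 2
        tail = (2.0 * a if k > 1 else a) + num / tail
    return 4.0 / tail

# F(1) should approach pi and F(3) should approach 4/pi, and both should
# match Gamma^2((a+1)/4) / Gamma^2((a+3)/4).
for a in (1, 3, 5):
    exact = math.gamma((a + 1) / 4) ** 2 / math.gamma((a + 3) / 4) ** 2
    print(a, ramanujan_cf(a, depth=100_000), exact)
```

Convergence is slow for small $a$ (for $a=1$ the convergents coincide with the partial sums of the Leibniz series $4(1-\tfrac13+\tfrac15-\cdots)$, so the error shrinks only like $1/\text{depth}$), hence the large `depth` value above.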
Generate integer solutions from a hyperbolic equation
2,093,896
4
https://math.stackexchange.com/questions/2093896/generate-integer-solutions-from-a-hyperbolic-equation
<p>I want to generate integer solutions of the equation $\sqrt{8r^2+1} = n$. I know there are integer solutions, but I have no idea how to approach this. Since division and square roots are the only operations that can turn a whole number into a fraction (when working with whole numbers), I thought I would be able to manipulate the equation to get rid of those operations, but no manipulation gets me any closer, and it seems I simply lack the mathematical background to solve this. Is there any way to solve this problem?</p>
[ { "answer_id": 2093923, "score": 4, "text": "<p>Another way to express this is to write $s=2r$ and look for integer solutions to\n$$n^2-2s^2=1,$$\nwith $s$ even and $n$ positive. This is known as a <a href=\"https://en.wikipedia.org/wiki/Pell&#39;s_equation\" rel=\"nofollow noreferrer\">Pell equation</a>, and its solutions are well understood. It is known that the solutions $(n,s)$ with $n$ positive and $s$ even are precisely the pairs of the form\n$$n+s\\sqrt{2}=(3+2\\sqrt{2})^k,$$\nfor some integer $k$. </p>\n\n<p>To show that every such pair $(n,s)$ is a solution to $n^2-2s^2=1$ is not too difficult, but to show that <em>every</em> solution is of this form requires some more advanced number theory (i.e. some ring theory and perhaps Dirichlet's unit theorem).</p>\n\n<hr>\n\n<p><strong>EDIT:</strong> If you are only interested in finding a single integer solution to this equation, then by simple trial and error with small values of $r$ and $n$ you will find one very quickly.</p>\n" }, { "answer_id": 2093952, "score": 1, "text": "<p>The solutions to $$ x^2 - 8 y^2 $$\nbegin $$ (1,0), \\;\\; (3,1), \\;\\; (17,6), \\; \\; (99,35), \\cdots $$ \nand continue forever with \n$$ x_{n+2} = 6 x_{n+1} - x_n, $$\n$$ y_{n+2} = 6 y_{n+1} - y_n. 
$$</p>\n\n<p>In turn, these come from a matrix that gives an automorphism of the quadratic form,\n$$\nA = \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right)\n$$\nThe automorphism property is\n$$\n \\left(\n\\begin{array}{cc}\n3 &amp; 1 \\\\\n8 &amp; 3\n\\end{array}\n\\right)\n \\left(\n\\begin{array}{cc}\n1 &amp; 0 \\\\\n0 &amp; -8\n\\end{array}\n\\right)\n \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right) =\n \\left(\n\\begin{array}{cc}\n1 &amp; 0 \\\\\n0 &amp; -8\n\\end{array}\n\\right),\n$$\nwhere $A$ is the \"smallest\" matrix that works.</p>\n\n<p>This allows us to write each $x,y$ ordered pair as a column vector, with\n$$\n\\left(\n\\begin{array}{c}\nx_{n+1} \\\\\ny_{n+1}\n\\end{array}\n\\right) = \n \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\nx_n \\\\\ny_n\n\\end{array}\n\\right).\n$$\nThe two separate linear recurrences above are the result of applying the Cayley-Hamilton Theorem using $A,$ which satisfies $A^2 - 6 A + I = 0.$</p>\n\n<pre><code>x: 3 y: 1 ratio: 3 SEED BACK ONE STEP 1 , 0\nx: 17 y: 6 ratio: 2.83333\nx: 99 y: 35 ratio: 2.82857\nx: 577 y: 204 ratio: 2.82843\nx: 3363 y: 1189 ratio: 2.82843\nx: 19601 y: 6930 ratio: 2.82843\nx: 114243 y: 40391 ratio: 2.82843\nx: 665857 y: 235416 ratio: 2.82843\nx: 3880899 y: 1372105 ratio: 2.82843\nx: 22619537 y: 7997214 ratio: 2.82843\n</code></pre>\n\n<p>=====================================================</p>\n\n<p>If we switch the right hand side, things immediately get a little different. 
Here we do $x^2 - 8 y^2 = 17.$ The recurrence deals with every <em>second</em> solution:</p>\n\n<pre><code>x: 5 y: -1 \nx: 5 y: 1 ratio: 5 SEED KEEP +- \nx: 7 y: 2 ratio: 3.5 SEED BACK ONE STEP 5 , -1\nx: 23 y: 8 ratio: 2.875\nx: 37 y: 13 ratio: 2.84615\nx: 133 y: 47 ratio: 2.82979\nx: 215 y: 76 ratio: 2.82895\nx: 775 y: 274 ratio: 2.82847\nx: 1253 y: 443 ratio: 2.82844\nx: 4517 y: 1597 ratio: 2.82843\nx: 7303 y: 2582 ratio: 2.82843\nx: 26327 y: 9308 ratio: 2.82843\nx: 42565 y: 15049 ratio: 2.82843\nx: 153445 y: 54251 ratio: 2.82843\nx: 248087 y: 87712 ratio: 2.82843\nx: 894343 y: 316198 ratio: 2.82843\nx: 1445957 y: 511223 ratio: 2.82843\nx: 5212613 y: 1842937 ratio: 2.82843\nx: 8427655 y: 2979626 ratio: 2.82843\n\nWed Jan 11 17:28:11 PST 2017\n\n x^2 - 8 y^2 = 17\n</code></pre>\n\n<p>==========================================</p>\n" }, { "answer_id": 2093923, "score": 4, "text": "<p>Another way to express this is to write $s=2r$ and look for integer solutions to\n$$n^2-2s^2=1,$$\nwith $s$ even and $n$ positive. This is known as a <a href=\"https://en.wikipedia.org/wiki/Pell&#39;s_equation\" rel=\"nofollow noreferrer\">Pell equation</a>, and its solutions are well understood. It is known that the solutions $(n,s)$ with $n$ positive and $s$ even are precisely the pairs of the form\n$$n+s\\sqrt{2}=(3+2\\sqrt{2})^k,$$\nfor some integer $k$. </p>\n\n<p>To show that every such pair $(n,s)$ is a solution to $n^2-2s^2=1$ is not too difficult, but to show that <em>every</em> solution is of this form requires some more advanced number theory (i.e. 
some ring theory and perhaps Dirichlet's unit theorem).</p>\n\n<hr>\n\n<p><strong>EDIT:</strong> If you are only interested in finding a single integer solution to this equation, then by simple trial and error with small values of $r$ and $n$ you will find one very quickly.</p>\n" }, { "answer_id": 2093952, "score": 1, "text": "<p>The solutions to $$ x^2 - 8 y^2 $$\nbegin $$ (1,0), \\;\\; (3,1), \\;\\; (17,6), \\; \\; (99,35), \\cdots $$ \nand continue forever with \n$$ x_{n+2} = 6 x_{n+1} - x_n, $$\n$$ y_{n+2} = 6 y_{n+1} - y_n. $$</p>\n\n<p>In turn, these come from a matrix that gives an automorphism of the quadratic form,\n$$\nA = \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right)\n$$\nThe automorphism property is\n$$\n \\left(\n\\begin{array}{cc}\n3 &amp; 1 \\\\\n8 &amp; 3\n\\end{array}\n\\right)\n \\left(\n\\begin{array}{cc}\n1 &amp; 0 \\\\\n0 &amp; -8\n\\end{array}\n\\right)\n \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right) =\n \\left(\n\\begin{array}{cc}\n1 &amp; 0 \\\\\n0 &amp; -8\n\\end{array}\n\\right),\n$$\nwhere $A$ is the \"smallest\" matrix that works.</p>\n\n<p>This allows us to write each $x,y$ ordered pair as a column vector, with\n$$\n\\left(\n\\begin{array}{c}\nx_{n+1} \\\\\ny_{n+1}\n\\end{array}\n\\right) = \n \\left(\n\\begin{array}{cc}\n3 &amp; 8 \\\\\n1 &amp; 3\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\nx_n \\\\\ny_n\n\\end{array}\n\\right).\n$$\nThe two separate linear recurrences above are the result of applying the Cayley-Hamilton Theorem using $A,$ which satisfies $A^2 - 6 A + I = 0.$</p>\n\n<pre><code>x: 3 y: 1 ratio: 3 SEED BACK ONE STEP 1 , 0\nx: 17 y: 6 ratio: 2.83333\nx: 99 y: 35 ratio: 2.82857\nx: 577 y: 204 ratio: 2.82843\nx: 3363 y: 1189 ratio: 2.82843\nx: 19601 y: 6930 ratio: 2.82843\nx: 114243 y: 40391 ratio: 2.82843\nx: 665857 y: 235416 ratio: 2.82843\nx: 3880899 y: 1372105 ratio: 2.82843\nx: 22619537 y: 7997214 ratio: 
2.82843\n</code></pre>\n\n<p>=====================================================</p>\n\n<p>If we switch the right hand side, things immediately get a little different. Here we do $x^2 - 8 y^2 = 17.$ The recurrence deals with every <em>second</em> solution:</p>\n\n<pre><code>x: 5 y: -1 \nx: 5 y: 1 ratio: 5 SEED KEEP +- \nx: 7 y: 2 ratio: 3.5 SEED BACK ONE STEP 5 , -1\nx: 23 y: 8 ratio: 2.875\nx: 37 y: 13 ratio: 2.84615\nx: 133 y: 47 ratio: 2.82979\nx: 215 y: 76 ratio: 2.82895\nx: 775 y: 274 ratio: 2.82847\nx: 1253 y: 443 ratio: 2.82844\nx: 4517 y: 1597 ratio: 2.82843\nx: 7303 y: 2582 ratio: 2.82843\nx: 26327 y: 9308 ratio: 2.82843\nx: 42565 y: 15049 ratio: 2.82843\nx: 153445 y: 54251 ratio: 2.82843\nx: 248087 y: 87712 ratio: 2.82843\nx: 894343 y: 316198 ratio: 2.82843\nx: 1445957 y: 511223 ratio: 2.82843\nx: 5212613 y: 1842937 ratio: 2.82843\nx: 8427655 y: 2979626 ratio: 2.82843\n\nWed Jan 11 17:28:11 PST 2017\n\n x^2 - 8 y^2 = 17\n</code></pre>\n\n<p>==========================================</p>\n" } ]
Question: Generate integer solutions from a hyperbolic equation Answer 1: <p>Another way to express this is to write $s=2r$ and look for integer solutions to $$n^2-2s^2=1,$$ with $s$ even and $n$ positive. This is known as a <a href="https://en.wikipedia.org/wiki/Pell&#39;s_equation" rel="nofollow noreferrer">Pell equation</a>, and its solutions are well understood. It is known that the solutions $(n,s)$ with $n$ positive and $s$ even are precisely the pairs of the form $$n+s\sqrt{2}=(3+2\sqrt{2})^k,$$ for some integer $k$. </p> <p>To show that every such pair $(n,s)$ is a solution to $n^2-2s^2=1$ is not too difficult, but to show that <em>every</em> solution is of this form requires some more advanced number theory (i.e. some ring theory and perhaps Dirichlet's unit theorem).</p> <hr> <p><strong>EDIT:</strong> If you are only interested in finding a single integer solution to this equation, then by simple trial and error with small values of $r$ and $n$ you will find one very quickly.</p> Answer 2: <p>The solutions to $$ x^2 - 8 y^2 $$ begin $$ (1,0), \;\; (3,1), \;\; (17,6), \; \; (99,35), \cdots $$ and continue forever with $$ x_{n+2} = 6 x_{n+1} - x_n, $$ $$ y_{n+2} = 6 y_{n+1} - y_n. 
$$</p> <p>In turn, these come from a matrix that gives an automorphism of the quadratic form, $$ A = \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) $$ The automorphism property is $$ \left( \begin{array}{cc} 3 &amp; 1 \\ 8 &amp; 3 \end{array} \right) \left( \begin{array}{cc} 1 &amp; 0 \\ 0 &amp; -8 \end{array} \right) \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) = \left( \begin{array}{cc} 1 &amp; 0 \\ 0 &amp; -8 \end{array} \right), $$ where $A$ is the "smallest" matrix that works.</p> <p>This allows us to write each $x,y$ ordered pair as a column vector, with $$ \left( \begin{array}{c} x_{n+1} \\ y_{n+1} \end{array} \right) = \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) \left( \begin{array}{c} x_n \\ y_n \end{array} \right). $$ The two separate linear recurrences above are the result of applying the Cayley-Hamilton Theorem using $A,$ which satisfies $A^2 - 6 A + I = 0.$</p> <pre><code>x: 3 y: 1 ratio: 3 SEED BACK ONE STEP 1 , 0 x: 17 y: 6 ratio: 2.83333 x: 99 y: 35 ratio: 2.82857 x: 577 y: 204 ratio: 2.82843 x: 3363 y: 1189 ratio: 2.82843 x: 19601 y: 6930 ratio: 2.82843 x: 114243 y: 40391 ratio: 2.82843 x: 665857 y: 235416 ratio: 2.82843 x: 3880899 y: 1372105 ratio: 2.82843 x: 22619537 y: 7997214 ratio: 2.82843 </code></pre> <p>=====================================================</p> <p>If we switch the right hand side, things immediately get a little different. 
Here we do $x^2 - 8 y^2 = 17.$ The recurrence deals with every <em>second</em> solution:</p> <pre><code>x: 5 y: -1 x: 5 y: 1 ratio: 5 SEED KEEP +- x: 7 y: 2 ratio: 3.5 SEED BACK ONE STEP 5 , -1 x: 23 y: 8 ratio: 2.875 x: 37 y: 13 ratio: 2.84615 x: 133 y: 47 ratio: 2.82979 x: 215 y: 76 ratio: 2.82895 x: 775 y: 274 ratio: 2.82847 x: 1253 y: 443 ratio: 2.82844 x: 4517 y: 1597 ratio: 2.82843 x: 7303 y: 2582 ratio: 2.82843 x: 26327 y: 9308 ratio: 2.82843 x: 42565 y: 15049 ratio: 2.82843 x: 153445 y: 54251 ratio: 2.82843 x: 248087 y: 87712 ratio: 2.82843 x: 894343 y: 316198 ratio: 2.82843 x: 1445957 y: 511223 ratio: 2.82843 x: 5212613 y: 1842937 ratio: 2.82843 x: 8427655 y: 2979626 ratio: 2.82843 Wed Jan 11 17:28:11 PST 2017 x^2 - 8 y^2 = 17 </code></pre> <p>==========================================</p> Answer 3: <p>Another way to express this is to write $s=2r$ and look for integer solutions to $$n^2-2s^2=1,$$ with $s$ even and $n$ positive. This is known as a <a href="https://en.wikipedia.org/wiki/Pell&#39;s_equation" rel="nofollow noreferrer">Pell equation</a>, and its solutions are well understood. It is known that the solutions $(n,s)$ with $n$ positive and $s$ even are precisely the pairs of the form $$n+s\sqrt{2}=(3+2\sqrt{2})^k,$$ for some integer $k$. </p> <p>To show that every such pair $(n,s)$ is a solution to $n^2-2s^2=1$ is not too difficult, but to show that <em>every</em> solution is of this form requires some more advanced number theory (i.e. some ring theory and perhaps Dirichlet's unit theorem).</p> <hr> <p><strong>EDIT:</strong> If you are only interested in finding a single integer solution to this equation, then by simple trial and error with small values of $r$ and $n$ you will find one very quickly.</p> Answer 4: <p>The solutions to $$ x^2 - 8 y^2 $$ begin $$ (1,0), \;\; (3,1), \;\; (17,6), \; \; (99,35), \cdots $$ and continue forever with $$ x_{n+2} = 6 x_{n+1} - x_n, $$ $$ y_{n+2} = 6 y_{n+1} - y_n. 
$$</p> <p>In turn, these come from a matrix that gives an automorphism of the quadratic form, $$ A = \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) $$ The automorphism property is $$ \left( \begin{array}{cc} 3 &amp; 1 \\ 8 &amp; 3 \end{array} \right) \left( \begin{array}{cc} 1 &amp; 0 \\ 0 &amp; -8 \end{array} \right) \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) = \left( \begin{array}{cc} 1 &amp; 0 \\ 0 &amp; -8 \end{array} \right), $$ where $A$ is the "smallest" matrix that works.</p> <p>This allows us to write each $x,y$ ordered pair as a column vector, with $$ \left( \begin{array}{c} x_{n+1} \\ y_{n+1} \end{array} \right) = \left( \begin{array}{cc} 3 &amp; 8 \\ 1 &amp; 3 \end{array} \right) \left( \begin{array}{c} x_n \\ y_n \end{array} \right). $$ The two separate linear recurrences above are the result of applying the Cayley-Hamilton Theorem using $A,$ which satisfies $A^2 - 6 A + I = 0.$</p> <pre><code>x: 3 y: 1 ratio: 3 SEED BACK ONE STEP 1 , 0 x: 17 y: 6 ratio: 2.83333 x: 99 y: 35 ratio: 2.82857 x: 577 y: 204 ratio: 2.82843 x: 3363 y: 1189 ratio: 2.82843 x: 19601 y: 6930 ratio: 2.82843 x: 114243 y: 40391 ratio: 2.82843 x: 665857 y: 235416 ratio: 2.82843 x: 3880899 y: 1372105 ratio: 2.82843 x: 22619537 y: 7997214 ratio: 2.82843 </code></pre> <p>=====================================================</p> <p>If we switch the right hand side, things immediately get a little different. 
Here we do $x^2 - 8 y^2 = 17.$ The recurrence deals with every <em>second</em> solution:</p> <pre><code>x: 5 y: -1 x: 5 y: 1 ratio: 5 SEED KEEP +- x: 7 y: 2 ratio: 3.5 SEED BACK ONE STEP 5 , -1 x: 23 y: 8 ratio: 2.875 x: 37 y: 13 ratio: 2.84615 x: 133 y: 47 ratio: 2.82979 x: 215 y: 76 ratio: 2.82895 x: 775 y: 274 ratio: 2.82847 x: 1253 y: 443 ratio: 2.82844 x: 4517 y: 1597 ratio: 2.82843 x: 7303 y: 2582 ratio: 2.82843 x: 26327 y: 9308 ratio: 2.82843 x: 42565 y: 15049 ratio: 2.82843 x: 153445 y: 54251 ratio: 2.82843 x: 248087 y: 87712 ratio: 2.82843 x: 894343 y: 316198 ratio: 2.82843 x: 1445957 y: 511223 ratio: 2.82843 x: 5212613 y: 1842937 ratio: 2.82843 x: 8427655 y: 2979626 ratio: 2.82843 Wed Jan 11 17:28:11 PST 2017 x^2 - 8 y^2 = 17 </code></pre> <p>==========================================</p>
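As a quick check (not part of either answer above), here is a short Python sketch that generates solutions of the OP's equation $\sqrt{8r^2+1} = n$, i.e. $n^2 - 8r^2 = 1$, using the recurrence $x_{n+2} = 6x_{n+1} - x_n$ from the second answer:

```python
def pell_solutions(count):
    """First `count` nontrivial solutions (n, r) of n^2 - 8 r^2 = 1,
    i.e. sqrt(8 r^2 + 1) = n, via the linear recurrences
    x_{k+2} = 6 x_{k+1} - x_k and y_{k+2} = 6 y_{k+1} - y_k."""
    x0, y0 = 1, 0   # trivial seed solution
    x1, y1 = 3, 1   # fundamental solution
    sols = []
    for _ in range(count):
        sols.append((x1, y1))
        x0, x1 = x1, 6 * x1 - x0
        y0, y1 = y1, 6 * y1 - y0
    return sols

print(pell_solutions(4))  # -> [(3, 1), (17, 6), (99, 35), (577, 204)]
```

The pairs match the table in the second answer, and equivalently $n + 2r\sqrt{2}$ runs over the powers of $3 + 2\sqrt{2}$, as the first answer states.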
determine the size of angle
2,094,040
4
https://math.stackexchange.com/questions/2094040/determine-the-size-of-angle
<p><a href="https://i.stack.imgur.com/Zn8Gx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zn8Gx.jpg" alt="enter image description here"></a></p> <p>Can you help me to determine the angle marked with a question mark?</p> <p>$\overline {AB}$ and $\overline {DE}$ are parallel</p>
[ { "answer_id": 2094098, "score": 2, "text": "<p>HINT: Draw a line through $C$ that is parallel to $AB$ and $DE$. This splits the angle in question into two parts. Can you find the measures of the two parts using alternate interior angles? </p>\n" }, { "answer_id": 2094117, "score": 2, "text": "<p><a href=\"https://i.stack.imgur.com/Uk6ju.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Uk6ju.png\" alt=\"enter image description here\"></a></p>\n\n<p>Divide $\\angle C$ into angles $C_1$ and $C_2$ as shown. </p>\n\n<ul>\n<li>Since $\\overline {AB}$ and $\\overline {CF}$ are parallel, then $\\angle C_2=40^\\circ$ by alternate interior angles. </li>\n<li>You can see that the supplement of $\\angle D$ is $60^\\circ$. Since $\\overline {DE}$ and $\\overline {CF}$ are parallel, then $\\angle C_1=60^\\circ$ by alternate interior angles. </li>\n</ul>\n\n<p>Add up $\\angle C_1$ and $\\angle C_2$ and you get $40^\\circ+60^\\circ=\\color{red}{100^\\circ}$</p>\n" }, { "answer_id": 2094067, "score": 1, "text": "<p>Assuming lines $AB$ and $DE$ are parallel, you can draw a line perpendicular to the line $DE$ (let's call this point $M$) to the point $B$ to obtain the quadrilateral $BCDM$.</p>\n\n<p>Now, use the fact that the sum of the interior angles in a quadrilateral is $360^{\\circ}$.</p>\n\n<p>From this, you can deduce that angles $\\angle{CBM}=90^{\\circ}-40^{\\circ}=50^{\\circ}$ and $\\angle{BMD}=90^{\\circ}$.</p>\n\n<p>Thus, the angle of the question mark is:</p>\n\n<p>$\\angle{DCB}=360^{\\circ}-50^{\\circ}-90^{\\circ}-120^{\\circ}=100^{\\circ}$</p>\n" }, { "answer_id": 2094313, "score": 1, "text": "<p>Draw a line perpendicular to $AB$ through $C$.</p>\n\n<p>Then $\\angle A'CB=180^\\circ-90^\\circ-40^\\circ=90^\\circ-40^\\circ=50^\\circ$.</p>\n\n<p>Do the same for $DE$ to get $\\angle D'CE=90^\\circ-60^\\circ=30^\\circ$.</p>\n\n<p>Now $\\angle BCD=180^\\circ-(50^\\circ+30^\\circ)=100^\\circ$.</p>\n\n<p>This, conveniently, is $\\angle 
ABC+(180^\\circ-\\angle EDC)$. </p>\n" }, { "answer_id": 2094098, "score": 2, "text": "<p>HINT: Draw a line through $C$ that is parallel to $AB$ and $DE$. This splits the angle in question into two parts. Can you find the measures of the two parts using alternate interior angles? </p>\n" }, { "answer_id": 2094117, "score": 2, "text": "<p><a href=\"https://i.stack.imgur.com/Uk6ju.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Uk6ju.png\" alt=\"enter image description here\"></a></p>\n\n<p>Divide $\\angle C$ into angles $C_1$ and $C_2$ as shown. </p>\n\n<ul>\n<li>Since $\\overline {AB}$ and $\\overline {CF}$ are parallel, then $\\angle C_2=40^\\circ$ by alternate interior angles. </li>\n<li>You can see that the supplement of $\\angle D$ is $60^\\circ$. Since $\\overline {DE}$ and $\\overline {CF}$ are parallel, then $\\angle C_1=60^\\circ$ by alternate interior angles. </li>\n</ul>\n\n<p>Add up $\\angle C_1$ and $\\angle C_2$ and you get $40^\\circ+60^\\circ=\\color{red}{100^\\circ}$</p>\n" }, { "answer_id": 2094067, "score": 1, "text": "<p>Assuming lines $AB$ and $DE$ are parallel, you can draw a line perpendicular to the line $DE$ (let's call this point $M$) to the point $B$ to obtain the quadrilateral $BCDM$.</p>\n\n<p>Now, use the fact that the sum of the interior angles in a quadrilateral is $360^{\\circ}$.</p>\n\n<p>From this, you can deduce that angles $\\angle{CBM}=90^{\\circ}-40^{\\circ}=50^{\\circ}$ and $\\angle{BMD}=90^{\\circ}$.</p>\n\n<p>Thus, the angle of the question mark is:</p>\n\n<p>$\\angle{DCB}=360^{\\circ}-50^{\\circ}-90^{\\circ}-120^{\\circ}=100^{\\circ}$</p>\n" }, { "answer_id": 2094313, "score": 1, "text": "<p>Draw a line perpendicular to $AB$ through $C$.</p>\n\n<p>Then $\\angle A'CB=180^\\circ-90^\\circ-40^\\circ=90^\\circ-40^\\circ=50^\\circ$.</p>\n\n<p>Do the same for $DE$ to get $\\angle D'CE=90^\\circ-60^\\circ=30^\\circ$.</p>\n\n<p>Now $\\angle BCD=180^\\circ-(50^\\circ+30^\\circ)=100^\\circ$.</p>\n\n<p>This, 
conveniently, is $\\angle ABC+(180^\\circ-\\angle EDC)$. </p>\n" } ]
Question: determine the size of angle Answer 1: <p>HINT: Draw a line through $C$ that is parallel to $AB$ and $DE$. This splits the angle in question into two parts. Can you find the measures of the two parts using alternate interior angles? </p> Answer 2: <p><a href="https://i.stack.imgur.com/Uk6ju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uk6ju.png" alt="enter image description here"></a></p> <p>Divide $\angle C$ into angles $C_1$ and $C_2$ as shown. </p> <ul> <li>Since $\overline {AB}$ and $\overline {CF}$ are parallel, then $\angle C_2=40^\circ$ by alternate interior angles. </li> <li>You can see that the supplement of $\angle D$ is $60^\circ$. Since $\overline {DE}$ and $\overline {CF}$ are parallel, then $\angle C_1=60^\circ$ by alternate interior angles. </li> </ul> <p>Add up $\angle C_1$ and $\angle C_2$ and you get $40^\circ+60^\circ=\color{red}{100^\circ}$</p> Answer 3: <p>Assuming lines $AB$ and $DE$ are parallel, you can draw a line perpendicular to the line $DE$ (let's call this point $M$) to the point $B$ to obtain the quadrilateral $BCDM$.</p> <p>Now, use the fact that the sum of the interior angles in a quadrilateral is $360^{\circ}$.</p> <p>From this, you can deduce that angles $\angle{CBM}=90^{\circ}-40^{\circ}=50^{\circ}$ and $\angle{BMD}=90^{\circ}$.</p> <p>Thus, the angle of the question mark is:</p> <p>$\angle{DCB}=360^{\circ}-50^{\circ}-90^{\circ}-120^{\circ}=100^{\circ}$</p> Answer 4: <p>Draw a line perpendicular to $AB$ through $C$.</p> <p>Then $\angle A'CB=180^\circ-90^\circ-40^\circ=90^\circ-40^\circ=50^\circ$.</p> <p>Do the same for $DE$ to get $\angle D'CE=90^\circ-60^\circ=30^\circ$.</p> <p>Now $\angle BCD=180^\circ-(50^\circ+30^\circ)=100^\circ$.</p> <p>This, conveniently, is $\angle ABC+(180^\circ-\angle EDC)$. </p> Answer 5: <p>HINT: Draw a line through $C$ that is parallel to $AB$ and $DE$. This splits the angle in question into two parts. 
Can you find the measures of the two parts using alternate interior angles? </p> Answer 6: <p><a href="https://i.stack.imgur.com/Uk6ju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uk6ju.png" alt="enter image description here"></a></p> <p>Divide $\angle C$ into angles $C_1$ and $C_2$ as shown. </p> <ul> <li>Since $\overline {AB}$ and $\overline {CF}$ are parallel, then $\angle C_2=40^\circ$ by alternate interior angles. </li> <li>You can see that the supplement of $\angle D$ is $60^\circ$. Since $\overline {DE}$ and $\overline {CF}$ are parallel, then $\angle C_1=60^\circ$ by alternate interior angles. </li> </ul> <p>Add up $\angle C_1$ and $\angle C_2$ and you get $40^\circ+60^\circ=\color{red}{100^\circ}$</p> Answer 7: <p>Assuming lines $AB$ and $DE$ are parallel, you can draw a line perpendicular to the line $DE$ (let's call this point $M$) to the point $B$ to obtain the quadrilateral $BCDM$.</p> <p>Now, use the fact that the sum of the interior angles in a quadrilateral is $360^{\circ}$.</p> <p>From this, you can deduce that angles $\angle{CBM}=90^{\circ}-40^{\circ}=50^{\circ}$ and $\angle{BMD}=90^{\circ}$.</p> <p>Thus, the angle of the question mark is:</p> <p>$\angle{DCB}=360^{\circ}-50^{\circ}-90^{\circ}-120^{\circ}=100^{\circ}$</p> Answer 8: <p>Draw a line perpendicular to $AB$ through $C$.</p> <p>Then $\angle A'CB=180^\circ-90^\circ-40^\circ=90^\circ-40^\circ=50^\circ$.</p> <p>Do the same for $DE$ to get $\angle D'CE=90^\circ-60^\circ=30^\circ$.</p> <p>Now $\angle BCD=180^\circ-(50^\circ+30^\circ)=100^\circ$.</p> <p>This, conveniently, is $\angle ABC+(180^\circ-\angle EDC)$. </p>
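The parallel-line answers above reduce to one line of arithmetic. A minimal numeric restatement (an illustration, not part of the original thread; the angle values are read off the answers, since the figure itself is not reproduced here):

```python
# Angle values taken from the answers above: angle B = 40 degrees,
# angle D = 120 degrees, with AB parallel to DE.
angle_B = 40
angle_D = 120
C2 = angle_B          # alternate interior angles across the helper line through C
C1 = 180 - angle_D    # supplement of D, then alternate interior angles
angle_C = C1 + C2
print(angle_C)        # 100, matching every answer
```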
Is $n^{0.01}$ Big Omega or Big O of $\log(n)$
2,094,165
4
https://math.stackexchange.com/questions/2094165/is-n0-01-big-omega-or-big-o-of-logn
<p>I was graphing $n^{0.01}$ for huge values of $n$ and noticed this function grows incredibly slowly, much slower than $\log(n)$, or so it appeared from the graph. So my initial thought was that</p> <p>$$n^{0.01}\in\mathcal O(\log(n))$$</p> <p>But I recalled that</p> <p>$$\log^x(n)\in \mathcal O(n^y)$$</p> <p>for any fixed constant $x &gt; 0$ and $y &gt; 0$, which made me think my initial thought was wrong. <a href="https://i.stack.imgur.com/NjgnP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NjgnP.png" alt="enter image description here"></a></p>
[ { "answer_id": 2094183, "score": 2, "text": "<p>It is enough to take the limit $\\frac{n^{\\alpha}}{\\log n}$ with $\\alpha &gt;0$ to see that it tends to infinity and hence we get a stronger relation: $n^{\\alpha} = \\omega (\\log n)$. In other words, the ratio is not bounded above by any constant.</p>\n" }, { "answer_id": 2094214, "score": 2, "text": "<p>To show that $\\lim_{n\\to\\infty} \\frac{n^a}{\\log n}=\\infty$ you can apply L'Hopital.</p>\n\n<p>$$\\lim_{n\\to\\infty}\\frac{n^a}{\\log n}=\\lim_{n\\to\\infty}\\frac{a\\cdot n^{a-1}}{\\frac{1}{n}}=\\lim_{n\\to\\infty}a\\cdot n^a=\\infty\\ (\\because a&gt;0)$$</p>\n" }, { "answer_id": 2094195, "score": 1, "text": "<p>How about this: Let $x_1 = 10^{1000}$ and $x_2 = 10^{2000}$. Then, assuming $\\log$ is the base-10 logarithm, </p>\n\n<p>$$\\log(x_1) = 1000 \\quad\\text{and}\\quad \\log(x_2) = 2000.$$</p>\n\n<p>From $x_1$ to $x_2$, $\\log(x)$ increases by $1000$. However, $x_1^{0.01} = 10^{10}$ and\n$x_2^{0.01} = 10^{20}$. The difference between $10^{10}$ and $10^{20}$ is clearly greater than $2000$. So, at least here, we can see $n^{0.01}$ growing faster than $\\log$.</p>\n" }, { "answer_id": 2094183, "score": 2, "text": "<p>It is enough to take the limit $\\frac{n^{\\alpha}}{\\log n}$ with $\\alpha &gt;0$ to see that it tends to infinity and hence we get a stronger relation: $n^{\\alpha} = \\omega (\\log n)$. In other words, the ratio is not bounded above by any constant.</p>\n" }, { "answer_id": 2094214, "score": 2, "text": "<p>To show that $\\lim_{n\\to\\infty} \\frac{n^a}{\\log n}=\\infty$ you can apply L'Hopital.</p>\n\n<p>$$\\lim_{n\\to\\infty}\\frac{n^a}{\\log n}=\\lim_{n\\to\\infty}\\frac{a\\cdot n^{a-1}}{\\frac{1}{n}}=\\lim_{n\\to\\infty}a\\cdot n^a=\\infty\\ (\\because a&gt;0)$$</p>\n" }, { "answer_id": 2094195, "score": 1, "text": "<p>How about this: Let $x_1 = 10^{1000}$ and $x_2 = 10^{2000}$. 
Then, assuming $\\log$ is the base-10 logarithm, </p>\n\n<p>$$\\log(x_1) = 1000 \\quad\\text{and}\\quad \\log(x_2) = 2000.$$</p>\n\n<p>From $x_1$ to $x_2$, $\\log(x)$ increases by $1000$. However, $x_1^{0.01} = 10^{10}$ and\n$x_2^{0.01} = 10^{20}$. The difference between $10^{10}$ and $10^{20}$ is clearly greater than $2000$. So, at least here, we can see $n^{0.01}$ growing faster than $\\log$.</p>\n" } ]
Question: Is $n^{0.01}$ Big Omega or Big O of $\log(n)$ Answer 1: <p>It is enough to take the limit $\frac{n^{\alpha}}{\log n}$ with $\alpha &gt;0$ to see that it tends to infinity and hence we get a stronger relation: $n^{\alpha} = \omega (\log n)$. In other words, the ratio is not bounded above by any constant.</p> Answer 2: <p>To show that $\lim_{n\to\infty} \frac{n^a}{\log n}=\infty$ you can apply L'Hopital.</p> <p>$$\lim_{n\to\infty}\frac{n^a}{\log n}=\lim_{n\to\infty}\frac{a\cdot n^{a-1}}{\frac{1}{n}}=\lim_{n\to\infty}a\cdot n^a=\infty\ (\because a&gt;0)$$</p> Answer 3: <p>How about this: Let $x_1 = 10^{1000}$ and $x_2 = 10^{2000}$. Then, assuming $\log$ is the base-10 logarithm, </p> <p>$$\log(x_1) = 1000 \quad\text{and}\quad \log(x_2) = 2000.$$</p> <p>From $x_1$ to $x_2$, $\log(x)$ increases by $1000$. However, $x_1^{0.01} = 10^{10}$ and $x_2^{0.01} = 10^{20}$. The difference between $10^{10}$ and $10^{20}$ is clearly greater than $2000$. So, at least here, we can see $n^{0.01}$ growing faster than $\log$.</p> Answer 4: <p>It is enough to take the limit $\frac{n^{\alpha}}{\log n}$ with $\alpha &gt;0$ to see that it tends to infinity and hence we get a stronger relation: $n^{\alpha} = \omega (\log n)$. In other words, the ratio is not bounded above by any constant.</p> Answer 5: <p>To show that $\lim_{n\to\infty} \frac{n^a}{\log n}=\infty$ you can apply L'Hopital.</p> <p>$$\lim_{n\to\infty}\frac{n^a}{\log n}=\lim_{n\to\infty}\frac{a\cdot n^{a-1}}{\frac{1}{n}}=\lim_{n\to\infty}a\cdot n^a=\infty\ (\because a&gt;0)$$</p> Answer 6: <p>How about this: Let $x_1 = 10^{1000}$ and $x_2 = 10^{2000}$. Then, assuming $\log$ is the base-10 logarithm, </p> <p>$$\log(x_1) = 1000 \quad\text{and}\quad \log(x_2) = 2000.$$</p> <p>From $x_1$ to $x_2$, $\log(x)$ increases by $1000$. However, $x_1^{0.01} = 10^{10}$ and $x_2^{0.01} = 10^{20}$. The difference between $10^{10}$ and $10^{20}$ is clearly greater than $2000$. 
So, at least here, we can see $n^{0.01}$ growing faster than $\log$.</p>
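A quick numeric sanity check of the conclusion above (an illustration, not a proof): writing $n = 10^k$, the ratio $n^{0.01}/\log_{10} n = 10^{0.01k}/k$ dips below $1$ for moderate $k$, which is why the graph is misleading, but eventually blows up, consistent with $n^{0.01} = \omega(\log n)$:

```python
# For n = 10**k the ratio n**0.01 / log10(n) equals 10**(0.01*k) / k.
# Working with the exponent k avoids overflowing floats for huge n.
def ratio(k):
    return 10 ** (0.01 * k) / k

small = ratio(100)      # about 0.1: n**0.01 still looks slower here
large = ratio(10_000)   # about 1e96: the power has long since taken over
print(small, large)
assert small < 1 < large
```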
Show that if the sum of components of one vector adds up to 1 then the sum of the squares of the same vector is at least 1/n
2,094,253
4
https://math.stackexchange.com/questions/2094253/show-that-if-the-sum-of-components-of-one-vector-adds-up-to-1-then-the-sum-of-th
<p>(NOTE: Already posted <a href="https://math.stackexchange.com/questions/2086752/vectors-and-real-numbers">here</a>, but closed without an answer)</p> <p>Hi, I've been trying to complete the following question:</p> <p>Suppose we have two vectors of $n$ real numbers, $[x_1,x_2,⋯,x_n]$ and $[y_1,y_2 ⋯,y_n]$ and the following inequality holds: $$(x_1y_1+x_2y_2+⋯+x_ny_n)^2≤(x_1^2+x_2^2+⋯+x_n^2)(y_1^2+y_2^2+⋯+y_n^2)$$ Show that if the sum of components of one vector adds up to 1 then the sum of the squares of the same vector is at least $\frac 1n$.</p> <p>I've tried a few avenues, and have come up with a proof that I am not confident is right.</p> <p>Proof by induction:</p> <p>Base case is $n=1$, which is trivial, since $x_1^2 = 1^2 = 1$ and so $1 \ge \frac 1 1$. Therefore base case is true.</p> <p>Assume it is true for n.</p> <p>$$x_1^2+...+x_n^2+x_{n+1}^2 \ge \frac 1 {n+1}$$</p> <p>Since $x_1^2+...+x_n^2 \ge \frac 1 n$ by our assumption,</p> <p>$$\frac 1 n + x_{n+1}^2 \ge \frac 1 {n+1}$$ It is this step that I think is incorrect, as the $x_1^2+...+x_n^2$ must get smaller in order to accommodate the new value of $x_{n+1}$, and still remain equal to 1. Therefore I don't think I can do this step?</p> <p>$$x_{n+1}^2 \ge \frac 1 {n+1} - \frac 1 n$$</p> <p>The left hand side must always be $\ge 0$ and the right hand side must always be negative for values of $n \ge 1$. Therefore true for $n+1$, so must be true for all $n$. QED.</p> <p>Can you confirm it is wrong? If it is wrong, could you please explain how to prove it in your answer. Thanks.</p>
[ { "answer_id": 2094264, "score": 3, "text": "<p>You can prove it using Jensen's inequality for $f(x) = x^2$. Since $x^2$ opens upwards, </p>\n\n<p>$$f(\\text{avg}~ a) \\le \\text{avg}~f(a)$$</p>\n\n<p>More specifically:</p>\n\n<p>$$\\left(\\frac 1n \\sum_{k=1}^n a_k\\right)^2 \\le \\frac 1n \\left(\\sum_{k=1}^n a_k{}^2\\right)$$</p>\n\n<p>The rest is just algebra, use $\\sum_{k=1}^n a_k = 1$ to establish $\\sum_{k=1}^n a_k{}^2 \\ge \\frac{1}{n}$.</p>\n" }, { "answer_id": 2113681, "score": 0, "text": "<p><strong>Hint:</strong> Try to look at the inequality with $y_i=1,\\forall i=1,...,n$</p>\n" }, { "answer_id": 2094264, "score": 3, "text": "<p>You can prove it using Jensen's inequality for $f(x) = x^2$. Since $x^2$ opens upwards, </p>\n\n<p>$$f(\\text{avg}~ a) \\le \\text{avg}~f(a)$$</p>\n\n<p>More specifically:</p>\n\n<p>$$\\left(\\frac 1n \\sum_{k=1}^n a_k\\right)^2 \\le \\frac 1n \\left(\\sum_{k=1}^n a_k{}^2\\right)$$</p>\n\n<p>The rest is just algebra, use $\\sum_{k=1}^n a_k = 1$ to establish $\\sum_{k=1}^n a_k{}^2 \\ge \\frac{1}{n}$.</p>\n" }, { "answer_id": 2113681, "score": 0, "text": "<p><strong>Hint:</strong> Try to look at the inequality with $y_i=1,\\forall i=1,...,n$</p>\n" } ]
Question: Show that if the sum of components of one vector adds up to 1 then the sum of the squares of the same vector is at least 1/n Answer 1: <p>You can prove it using Jensen's inequality for $f(x) = x^2$. Since $x^2$ opens upwards, </p> <p>$$f(\text{avg}~ a) \le \text{avg}~f(a)$$</p> <p>More specifically:</p> <p>$$\left(\frac 1n \sum_{k=1}^n a_k\right)^2 \le \frac 1n \left(\sum_{k=1}^n a_k{}^2\right)$$</p> <p>The rest is just algebra, use $\sum_{k=1}^n a_k = 1$ to establish $\sum_{k=1}^n a_k{}^2 \ge \frac{1}{n}$.</p> Answer 2: <p><strong>Hint:</strong> Try to look at the inequality with $y_i=1,\forall i=1,...,n$</p> Answer 3: <p>You can prove it using Jensen's inequality for $f(x) = x^2$. Since $x^2$ opens upwards, </p> <p>$$f(\text{avg}~ a) \le \text{avg}~f(a)$$</p> <p>More specifically:</p> <p>$$\left(\frac 1n \sum_{k=1}^n a_k\right)^2 \le \frac 1n \left(\sum_{k=1}^n a_k{}^2\right)$$</p> <p>The rest is just algebra, use $\sum_{k=1}^n a_k = 1$ to establish $\sum_{k=1}^n a_k{}^2 \ge \frac{1}{n}$.</p> Answer 4: <p><strong>Hint:</strong> Try to look at the inequality with $y_i=1,\forall i=1,...,n$</p>
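The bound in the answers above is easy to spot-check numerically. A small sketch (an illustration, not a proof; the random ranges and tolerance constants are arbitrary choices):

```python
import random

# Empirical check: if the components of x sum to 1, then the sum of squares
# is at least 1/n, with equality at x_i = 1/n for all i.
random.seed(0)
n = 10
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(n)]
    shift = (1 - sum(x)) / n
    x = [xi + shift for xi in x]               # now the components sum to 1
    assert abs(sum(x) - 1) < 1e-9
    assert sum(xi * xi for xi in x) >= 1 / n - 1e-12

# The extreme case: the uniform vector attains the bound exactly.
uniform = [1 / n] * n
print(sum(xi * xi for xi in uniform))          # 1/n = 0.1 up to rounding
```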
expansion of logarithmic matrix
2,094,319
4
https://math.stackexchange.com/questions/2094319/expansion-of-logarithmic-matrix
<p>Let $A,B$ be Hermitian matrices ($A$ has positive or zero eigenvalues and $Tr A=Tr[A+\lambda B]=1$), and $\lambda$ is an infinitesimal constant. How can one expand \begin{equation} \ln(A+\lambda B) \end{equation} and \begin{equation} Tr[(A+\lambda B)\ln(A+\lambda B)] \end{equation} into powers of $\lambda$, as one usually does in a Taylor expansion of functions?</p>
[ { "answer_id": 2099146, "score": 3, "text": "<p>First note the derivative of the scalar function \n$$\\eqalign{\n f(x)&amp;=x\\log x\\cr\nf^\\prime=\\frac{df}{dx}&amp;=1+\\log x\n}$$\nNext, define a new matrix variable \n$$\\eqalign{\n M(\\lambda) &amp;= A+B\\lambda \\cr\ndM &amp;= B\\,d\\lambda\\cr\n}$$\nThen use this result for the differential of the trace of a matrix function\n$$\\eqalign{\n d\\operatorname{tr}f(M) &amp;= f^\\prime(M^T):dM\n}$$where colon denotes the Frobenius product, i.e. $\\,\\,A:B=\\operatorname{tr}(A^TB)$</p>\n\n<p><br></p>\n\n<p>Now to address your second question, let $$T(\\lambda)=\\operatorname{tr}(M\\log M)=\\operatorname{tr}(f(M))$$ and use the preceding to find its gradient $g(\\lambda)$\n$$\\eqalign{\n dT &amp;= f^\\prime(M^T):dM \\cr\n &amp;=(I+\\log M)^T:B\\,d\\lambda \\cr\n &amp;= \\operatorname{tr}(B(I+\\log M))\\,d\\lambda \\cr\\cr\ng(\\lambda) &amp;= \\frac{\\partial T}{\\partial\\lambda} = \\operatorname{tr}(B+B\\log M) \\cr\\cr\n}$$\nAt $\\lambda=0$, we can evaluate each of these quantities \n$$\\eqalign{\nM(0) &amp;= A \\cr\nT(0) &amp;= \\operatorname{tr}(A\\log A) \\cr\ng(0) &amp;= \\operatorname{tr}(B+B\\log A) \\cr\\cr\n}$$\nFinally, we can expand $T(\\lambda)$ about $\\lambda=0$\n$$\\eqalign{\nT(\\lambda) &amp;= T(0) + \\lambda g(0) \\cr\n &amp;\\approx \\operatorname{tr}(A\\log A) + \\lambda\\operatorname{tr}(B+B\\log A) \\cr\\cr\\cr\n}$$</p>\n\n<p>As for your first question, you could apply the block-triangular method of Kenney and Laub \n$$\\eqalign{\nL = \\log\\Bigg(\\begin{bmatrix}A&amp;B\\\\0&amp;A\\end{bmatrix}\\Bigg) = \\begin{bmatrix}\\log A&amp;\\frac{d\\log M}{d\\lambda}|_{\\lambda=0}\\\\0&amp;\\log A\\end{bmatrix} \\cr\\cr\n}$$\nand then \n$$\\eqalign{\n \\log(M) &amp;= \\log(A+\\lambda B) \\cr\n &amp;\\approx \\log(A) + \\,\\lambda\\,\\,\\begin{bmatrix}I&amp;0\\end{bmatrix}L\\begin{bmatrix}0\\\\I\\end{bmatrix} \\cr\n}$$</p>\n" }, { "answer_id": 3174833, "score": 3, "text": "<p>Stephen L Adler wrote a <a 
href=\"https://www.sns.ias.edu/sites/default/files/files/1-matrixlog_tex(1).pdf\" rel=\"nofollow noreferrer\">nice set of notes</a>, entitled <em>Taylor Expansion and Derivative Formulas for Matrix Logarithms</em>.</p>\n\n<p>He shows (at a physicist level of rigor) that</p>\n\n<p><span class=\"math-container\">$$\\ln (A + t B) = \\ln(A) + t \\int_0^\\infty \\frac{1}{A+z}B\\frac{1}{A+z}dz+\\mathcal O(t^2).$$</span></p>\n\n<p>The starting point is the matrix expansion:</p>\n\n<p><span class=\"math-container\">$$\\frac{1}{A+tB}=\\frac{1}{A}-t\\frac{1}{A}B\\frac{1}{A}+t^2\\frac{1}{A}B\\frac{1}{A}B\\frac{1}{A}-\\dots$$</span></p>\n\n<p>which can be verified by multiplying both sides by <span class=\"math-container\">$A+tB$</span> and expanding.</p>\n\n<p>A second important identity is</p>\n\n<p><span class=\"math-container\">$$\\int_0^u \\frac{1}{X+z}dz=\\ln(X+u)-\\ln(X).$$</span></p>\n\n<p>For us, matrix <span class=\"math-container\">$X$</span> could be either <span class=\"math-container\">$A$</span> or <span class=\"math-container\">$A+tB$</span>. Expanding these inverse matrices under the integral, then manipulating the resulting logarithms, then finally taking the limit <span class=\"math-container\">$u \\rightarrow \\infty$</span>, gives the desired result for <span class=\"math-container\">$\\ln (A+tB)$</span> to all orders in <span class=\"math-container\">$t$</span>.</p>\n" }, { "answer_id": 2099146, "score": 3, "text": "<p>First note the derivative of the scalar function \n$$\\eqalign{\n f(x)&amp;=x\\log x\\cr\nf^\\prime=\\frac{df}{dx}&amp;=1+\\log x\n}$$\nNext, define a new matrix variable \n$$\\eqalign{\n M(\\lambda) &amp;= A+B\\lambda \\cr\ndM &amp;= B\\,d\\lambda\\cr\n}$$\nThen use this result for the differential of the trace of a matrix function\n$$\\eqalign{\n d\\operatorname{tr}f(M) &amp;= f^\\prime(M^T):dM\n}$$where colon denotes the Frobenius product, i.e. 
$\\,\\,A:B=\\operatorname{tr}(A^TB)$</p>\n\n<p><br></p>\n\n<p>Now to address your second question, let $$T(\\lambda)=\\operatorname{tr}(M\\log M)=\\operatorname{tr}(f(M))$$ and use the preceding to find its gradient $g(\\lambda)$\n$$\\eqalign{\n dT &amp;= f^\\prime(M^T):dM \\cr\n &amp;=(I+\\log M)^T:B\\,d\\lambda \\cr\n &amp;= \\operatorname{tr}(B(I+\\log M))\\,d\\lambda \\cr\\cr\ng(\\lambda) &amp;= \\frac{\\partial T}{\\partial\\lambda} = \\operatorname{tr}(B+B\\log M) \\cr\\cr\n}$$\nAt $\\lambda=0$, we can evaluate each of these quantities \n$$\\eqalign{\nM(0) &amp;= A \\cr\nT(0) &amp;= \\operatorname{tr}(A\\log A) \\cr\ng(0) &amp;= \\operatorname{tr}(B+B\\log A) \\cr\\cr\n}$$\nFinally, we can expand $T(\\lambda)$ about $\\lambda=0$\n$$\\eqalign{\nT(\\lambda) &amp;= T(0) + \\lambda g(0) \\cr\n &amp;\\approx \\operatorname{tr}(A\\log A) + \\lambda\\operatorname{tr}(B+B\\log A) \\cr\\cr\\cr\n}$$</p>\n\n<p>As for your first question, you could apply the block-triangular method of Kenney and Laub \n$$\\eqalign{\nL = \\log\\Bigg(\\begin{bmatrix}A&amp;B\\\\0&amp;A\\end{bmatrix}\\Bigg) = \\begin{bmatrix}\\log A&amp;\\frac{d\\log M}{d\\lambda}|_{\\lambda=0}\\\\0&amp;\\log A\\end{bmatrix} \\cr\\cr\n}$$\nand then \n$$\\eqalign{\n \\log(M) &amp;= \\log(A+\\lambda B) \\cr\n &amp;\\approx \\log(A) + \\,\\lambda\\,\\,\\begin{bmatrix}I&amp;0\\end{bmatrix}L\\begin{bmatrix}0\\\\I\\end{bmatrix} \\cr\n}$$</p>\n" }, { "answer_id": 3174833, "score": 3, "text": "<p>Stephen L Adler wrote a <a href=\"https://www.sns.ias.edu/sites/default/files/files/1-matrixlog_tex(1).pdf\" rel=\"nofollow noreferrer\">nice set of notes</a>, entitled <em>Taylor Expansion and Derivative Formulas for Matrix Logarithms</em>.</p>\n\n<p>He shows (at a physicist level of rigor) that</p>\n\n<p><span class=\"math-container\">$$\\ln (A + t B) = \\ln(A) + t \\int_0^\\infty \\frac{1}{A+z}B\\frac{1}{A+z}dz+\\mathcal O(t^2).$$</span></p>\n\n<p>The starting point is the matrix expansion:</p>\n\n<p><span 
class=\"math-container\">$$\\frac{1}{A+tB}=\\frac{1}{A}-t\\frac{1}{A}B\\frac{1}{A}+t^2\\frac{1}{A}B\\frac{1}{A}B\\frac{1}{A}-\\dots$$</span></p>\n\n<p>which can be verified by multiplying both sides by <span class=\"math-container\">$A+tB$</span> and expanding.</p>\n\n<p>A second important identity is</p>\n\n<p><span class=\"math-container\">$$\\int_0^u \\frac{1}{X+z}dz=\\ln(X+u)-\\ln(X).$$</span></p>\n\n<p>For us, matrix <span class=\"math-container\">$X$</span> could be either <span class=\"math-container\">$A$</span> or <span class=\"math-container\">$A+tB$</span>. Expanding these inverse matrices under the integral, then manipulating the resulting logarithms, then finally taking the limit <span class=\"math-container\">$u \\rightarrow \\infty$</span>, gives the desired result for <span class=\"math-container\">$\\ln (A+tB)$</span> to all orders in <span class=\"math-container\">$t$</span>.</p>\n" } ]
Question: expansion of logarithmic matrix Answer 1: <p>First note the derivative of the scalar function $$\eqalign{ f(x)&amp;=x\log x\cr f^\prime=\frac{df}{dx}&amp;=1+\log x }$$ Next, define a new matrix variable $$\eqalign{ M(\lambda) &amp;= A+B\lambda \cr dM &amp;= B\,d\lambda\cr }$$ Then use this result for the differential of the trace of a matrix function $$\eqalign{ d\operatorname{tr}f(M) &amp;= f^\prime(M^T):dM }$$where colon denotes the Frobenius product, i.e. $\,\,A:B=\operatorname{tr}(A^TB)$</p> <p><br></p> <p>Now to address your second question, let $$T(\lambda)=\operatorname{tr}(M\log M)=\operatorname{tr}(f(M))$$ and use the preceding to find its gradient $g(\lambda)$ $$\eqalign{ dT &amp;= f^\prime(M^T):dM \cr &amp;=(I+\log M)^T:B\,d\lambda \cr &amp;= \operatorname{tr}(B(I+\log M))\,d\lambda \cr\cr g(\lambda) &amp;= \frac{\partial T}{\partial\lambda} = \operatorname{tr}(B+B\log M) \cr\cr }$$ At $\lambda=0$, we can evaluate each of these quantities $$\eqalign{ M(0) &amp;= A \cr T(0) &amp;= \operatorname{tr}(A\log A) \cr g(0) &amp;= \operatorname{tr}(B+B\log A) \cr\cr }$$ Finally, we can expand $T(\lambda)$ about $\lambda=0$ $$\eqalign{ T(\lambda) &amp;= T(0) + \lambda g(0) \cr &amp;\approx \operatorname{tr}(A\log A) + \lambda\operatorname{tr}(B+B\log A) \cr\cr\cr }$$</p> <p>As for your first question, you could apply the block-triangular method of Kenney and Laub $$\eqalign{ L = \log\Bigg(\begin{bmatrix}A&amp;B\\0&amp;A\end{bmatrix}\Bigg) = \begin{bmatrix}\log A&amp;\frac{d\log M}{d\lambda}|_{\lambda=0}\\0&amp;\log A\end{bmatrix} \cr\cr }$$ and then $$\eqalign{ \log(M) &amp;= \log(A+\lambda B) \cr &amp;\approx \log(A) + \,\lambda\,\,\begin{bmatrix}I&amp;0\end{bmatrix}L\begin{bmatrix}0\\I\end{bmatrix} \cr }$$</p> Answer 2: <p>Stephen L Adler wrote a <a href="https://www.sns.ias.edu/sites/default/files/files/1-matrixlog_tex(1).pdf" rel="nofollow noreferrer">nice set of notes</a>, entitled <em>Taylor Expansion and Derivative Formulas for Matrix 
Logarithms</em>.</p> <p>He shows (at a physicist level of rigor) that</p> <p><span class="math-container">$$\ln (A + t B) = \ln(A) + t \int_0^\infty \frac{1}{A+z}B\frac{1}{A+z}dz+\mathcal O(t^2).$$</span></p> <p>The starting point is the matrix expansion:</p> <p><span class="math-container">$$\frac{1}{A+tB}=\frac{1}{A}-t\frac{1}{A}B\frac{1}{A}+t^2\frac{1}{A}B\frac{1}{A}B\frac{1}{A}-\dots$$</span></p> <p>which can be verified by multiplying both sides by <span class="math-container">$A+tB$</span> and expanding.</p> <p>A second important identity is</p> <p><span class="math-container">$$\int_0^u \frac{1}{X+z}dz=\ln(X+u)-\ln(X).$$</span></p> <p>For us, matrix <span class="math-container">$X$</span> could be either <span class="math-container">$A$</span> or <span class="math-container">$A+tB$</span>. Expanding these inverse matrices under the integral, then manipulating the resulting logarithms, then finally taking the limit <span class="math-container">$u \rightarrow \infty$</span>, gives the desired result for <span class="math-container">$\ln (A+tB)$</span> to all orders in <span class="math-container">$t$</span>.</p> Answer 3: <p>First note the derivative of the scalar function $$\eqalign{ f(x)&amp;=x\log x\cr f^\prime=\frac{df}{dx}&amp;=1+\log x }$$ Next, define a new matrix variable $$\eqalign{ M(\lambda) &amp;= A+B\lambda \cr dM &amp;= B\,d\lambda\cr }$$ Then use this result for the differential of the trace of a matrix function $$\eqalign{ d\operatorname{tr}f(M) &amp;= f^\prime(M^T):dM }$$where colon denotes the Frobenius product, i.e. 
$\,\,A:B=\operatorname{tr}(A^TB)$</p> <p><br></p> <p>Now to address your second question, let $$T(\lambda)=\operatorname{tr}(M\log M)=\operatorname{tr}(f(M))$$ and use the preceding to find its gradient $g(\lambda)$ $$\eqalign{ dT &amp;= f^\prime(M^T):dM \cr &amp;=(I+\log M)^T:B\,d\lambda \cr &amp;= \operatorname{tr}(B(I+\log M))\,d\lambda \cr\cr g(\lambda) &amp;= \frac{\partial T}{\partial\lambda} = \operatorname{tr}(B+B\log M) \cr\cr }$$ At $\lambda=0$, we can evaluate each of these quantities $$\eqalign{ M(0) &amp;= A \cr T(0) &amp;= \operatorname{tr}(A\log A) \cr g(0) &amp;= \operatorname{tr}(B+B\log A) \cr\cr }$$ Finally, we can expand $T(\lambda)$ about $\lambda=0$ $$\eqalign{ T(\lambda) &amp;= T(0) + \lambda g(0) \cr &amp;\approx \operatorname{tr}(A\log A) + \lambda\operatorname{tr}(B+B\log A) \cr\cr\cr }$$</p> <p>As for your first question, you could apply the block-triangular method of Kenney and Laub $$\eqalign{ L = \log\Bigg(\begin{bmatrix}A&amp;B\\0&amp;A\end{bmatrix}\Bigg) = \begin{bmatrix}\log A&amp;\frac{d\log M}{d\lambda}|_{\lambda=0}\\0&amp;\log A\end{bmatrix} \cr\cr }$$ and then $$\eqalign{ \log(M) &amp;= \log(A+\lambda B) \cr &amp;\approx \log(A) + \,\lambda\,\,\begin{bmatrix}I&amp;0\end{bmatrix}L\begin{bmatrix}0\\I\end{bmatrix} \cr }$$</p> Answer 4: <p>Stephen L Adler wrote a <a href="https://www.sns.ias.edu/sites/default/files/files/1-matrixlog_tex(1).pdf" rel="nofollow noreferrer">nice set of notes</a>, entitled <em>Taylor Expansion and Derivative Formulas for Matrix Logarithms</em>.</p> <p>He shows (at a physicist level of rigor) that</p> <p><span class="math-container">$$\ln (A + t B) = \ln(A) + t \int_0^\infty \frac{1}{A+z}B\frac{1}{A+z}dz+\mathcal O(t^2).$$</span></p> <p>The starting point is the matrix expansion:</p> <p><span class="math-container">$$\frac{1}{A+tB}=\frac{1}{A}-t\frac{1}{A}B\frac{1}{A}+t^2\frac{1}{A}B\frac{1}{A}B\frac{1}{A}-\dots$$</span></p> <p>which can be verified by multiplying both sides by <span 
class="math-container">$A+tB$</span> and expanding.</p> <p>A second important identity is</p> <p><span class="math-container">$$\int_0^u \frac{1}{X+z}dz=\ln(X+u)-\ln(X).$$</span></p> <p>For us, matrix <span class="math-container">$X$</span> could be either <span class="math-container">$A$</span> or <span class="math-container">$A+tB$</span>. Expanding these inverse matrices under the integral, then manipulating the resulting logarithms, then finally taking the limit <span class="math-container">$u \rightarrow \infty$</span>, gives the desired result for <span class="math-container">$\ln (A+tB)$</span> to all orders in <span class="math-container">$t$</span>.</p>
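The first-order trace expansion in the answers above can be spot-checked numerically. The sketch below is an illustration under stated assumptions (NumPy, a random Hermitian pair with $\operatorname{Tr}A=1$ and $\operatorname{Tr}B=0$ as in the question, so the $\operatorname{tr}(B)$ term drops out); it verifies that the error of $\operatorname{tr}[(A+\lambda B)\log(A+\lambda B)] \approx \operatorname{tr}(A\log A) + \lambda\operatorname{tr}(B+B\log A)$ is far below $\lambda$ itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def herm(n):
    # random Hermitian matrix
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

def logm_herm(H):
    # matrix log of a positive-definite Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.log(w)) @ V.conj().T

H = herm(n)
P = H @ H.conj().T + np.eye(n)               # positive definite
A = P / np.trace(P).real                     # Tr A = 1
B = herm(n)
B = B - (np.trace(B).real / n) * np.eye(n)   # Tr B = 0, so Tr(A + t*B) = 1

t = 1e-5
exact = np.trace((A + t * B) @ logm_herm(A + t * B)).real
approx = np.trace(A @ logm_herm(A)).real + t * np.trace(B @ logm_herm(A)).real
err = abs(exact - approx)
print(err)                                   # O(t**2), far below t itself
assert err < 1e-6
```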
A simple modular arithmetic query.
2,094,374
4
https://math.stackexchange.com/questions/2094374/a-simple-modular-arithmetic-query
<p>Given $a,b,c\in\Bbb N$ with $\mathsf{gcd}(a,b)=\mathsf{gcd}(b,c)=\mathsf{gcd}(c,a)=1$ we know that there are $m_1,m_2,m_3\in\Bbb N$ such that $a\equiv m_1a^2\bmod abc$, $ab\equiv m_2ab\bmod abc$ and $b\equiv m_3b^2\bmod abc$ holds.</p> <p>It is also easy to see there is a single $m$ such that $$a\equiv ma\bmod ab,\quad b\equiv mb\bmod ab$$ holds.</p> <p>However how to find a single $m$ coprime to $c$ such that $$1\equiv ma\bmod abc,\quad 1\equiv mb\bmod abc$$ holds?</p> <p>At least how to find a single $m$ such that $$\ell_1 a\equiv ma^2\bmod abc, \quad \ell_2 ab\equiv mab\bmod abc,\quad\ell_3b\equiv mb^2\bmod abc$$ holds where $0&lt;\ell_1,\ell_2,\ell_3&lt;\log abc$ holds and at least one of $\ell_1,\ell_2,\ell_3$ is distinct?</p> <p>If not how small can we make $\max(\ell_1,\ell_2,\ell_3)$ where $a\nmid\ell_1$ and $b\nmid\ell_3$ holds?</p>
[ { "answer_id": 2094468, "score": 0, "text": "<p>I'll answer one of your questions ...\n<p>\nYou ask how to find a single $m$ such that</p>\n\n<p>\\begin{align*}\n1 &amp;\\equiv ma \\text{ mod } (abc)\\\\\n1 &amp;\\equiv mb \\text{ mod } (abc)\\\\\n\\end{align*}</p>\n\n<p>But</p>\n\n<p>\\begin{align*}\n&amp;1 \\equiv ma \\text{ mod } (abc)\\\\\n\\implies\\; &amp;1 \\equiv ma \\text{ mod } (a)\\\\\n\\implies\\; &amp;1 \\equiv 0 \\text{ mod } (a)\\\\\n\\implies\\; &amp;a = 1\\\\\n\\end{align*}</p>\n\n<p>Similarly</p>\n\n<p>\\begin{align*}\n&amp;1 \\equiv mb \\text{ mod } (abc)\\\\\n\\implies\\; &amp;1 \\equiv mb \\text{ mod } (b)\\\\\n\\implies\\; &amp;1 \\equiv 0 \\text{ mod } (b)\\\\\n\\implies\\; &amp;b = 1\\\\\n\\end{align*}</p>\n\n<p>Thus, unless $a = b = 1$, there will be no such $m$.</p>\n" }, { "answer_id": 2094565, "score": 0, "text": "<blockquote>\n <p>At least how to find a single $m$ such that $$\\ell_1 a\\equiv ma^2\\bmod\n abc, \\quad \\ell_2 ab\\equiv mab\\bmod abc,\\quad\\ell_3b\\equiv mb^2\\bmod\n abc$$ holds where $0&lt;\\ell_1,\\ell_2,\\ell_3&lt;\\log abc$ holds?</p>\n</blockquote>\n\n<p>First note that $gcd(ma^2,abc)=a, gcd(ma^2,a\\ell_1)=a$. 
This is a direct result of $\\gcd(a,r)=a \\iff \\gcd(a,b)=a$ in $a \\equiv r \\pmod b$</p>\n\n<p>Then rewrite $ma^2 \\equiv \\ell_1 \\pmod{abc}$ as:</p>\n\n<p>$ma^2 = abck + a\\ell_1$</p>\n\n<p>$ma = bck + \\ell_1$</p>\n\n<p>Then $ma \\equiv \\ell_1 \\pmod{bc}$</p>\n\n<p>It is easy to follow these simplifications to obtain the others.</p>\n\n<hr>\n\n<p>You can simplify as:</p>\n\n<p>$$\\ell_1 \\equiv ma \\pmod{bc}, \\quad \\ell_2 \\equiv m \\pmod c, \\quad \\ell_3 \\equiv mb \\pmod{ac}$$</p>\n\n<p>Now choose a value $r \\in (0,\\log abc)$</p>\n\n<p>Note that if $m&lt;c$ then $m=r$ and $\\ell_1=\\ell_2=\\ell_3=r$ </p>\n\n<p>When $c \\mid m$ then $\\ell_1=\\ell_2=\\ell_3=0$</p>\n\n<p>When $m&gt;c \\Rightarrow$ $m=ck+r, a=b$ then $\\ell_1=\\ell_2=\\ell_3=r$ and you will have:</p>\n\n<p>$$a\\ell_1 \\equiv ma \\pmod{ac}, \\quad \\ell_2 \\equiv m \\pmod c, \\quad b\\ell_3 \\equiv mb \\pmod{bc}$$</p>\n\n<p>For $m&gt;c$ then $\\ell_1\\neq\\ell_2\\neq\\ell_3 \\iff a,b,c$ are pairwise coprime thus the solution must follow these restrictions:</p>\n\n<p>$$\\ell_1 &lt; bc &lt; \\log abc$$</p>\n\n<p>$$\\ell_2 &lt; c &lt; \\log abc$$</p>\n\n<p>$$\\ell_3 &lt; ac &lt; \\log abc$$</p>\n" } ]
Question: A simple modular arithmetic query. Answer 1: <p>I'll answer one of your questions ... <p> You ask how to find a single $m$ such that</p> <p>\begin{align*} 1 &amp;\equiv ma \text{ mod } (abc)\\ 1 &amp;\equiv mb \text{ mod } (abc)\\ \end{align*}</p> <p>But</p> <p>\begin{align*} &amp;1 \equiv ma \text{ mod } (abc)\\ \implies\; &amp;1 \equiv ma \text{ mod } (a)\\ \implies\; &amp;1 \equiv 0 \text{ mod } (a)\\ \implies\; &amp;a = 1\\ \end{align*}</p> <p>Similarly</p> <p>\begin{align*} &amp;1 \equiv mb \text{ mod } (abc)\\ \implies\; &amp;1 \equiv mb \text{ mod } (b)\\ \implies\; &amp;1 \equiv 0 \text{ mod } (b)\\ \implies\; &amp;b = 1\\ \end{align*}</p> <p>Thus, unless $a = b = 1$, there will be no such $m$.</p> Answer 2: <blockquote> <p>At least how to find a single $m$ such that $$\ell_1 a\equiv ma^2\bmod abc, \quad \ell_2 ab\equiv mab\bmod abc,\quad\ell_3b\equiv mb^2\bmod abc$$ holds where $0&lt;\ell_1,\ell_2,\ell_3&lt;\log abc$ holds?</p> </blockquote> <p>First note that $gcd(ma^2,abc)=a, gcd(ma^2,a\ell_1)=a$. 
This is a direct result of $\gcd(a,r)=a \iff \gcd(a,b)=a$ in $a \equiv r \pmod b$</p> <p>Then rewrite $ma^2 \equiv \ell_1 \pmod{abc}$ as:</p> <p>$ma^2 = abck + a\ell_1$</p> <p>$ma = bck + \ell_1$</p> <p>Then $ma \equiv \ell_1 \pmod{bc}$</p> <p>It is easy to follow these simplifications to obtain the others.</p> <hr> <p>You can simplify as:</p> <p>$$\ell_1 \equiv ma \pmod{bc}, \quad \ell_2 \equiv m \pmod c, \quad \ell_3 \equiv mb \pmod{ac}$$</p> <p>Now choose a value $r \in (0,\log abc)$</p> <p>Note that if $m&lt;c$ then $m=r$ and $\ell_1=\ell_2=\ell_3=r$ </p> <p>When $c \mid m$ then $\ell_1=\ell_2=\ell_3=0$</p> <p>When $m&gt;c \Rightarrow$ $m=ck+r, a=b$ then $\ell_1=\ell_2=\ell_3=r$ and you will have:</p> <p>$$a\ell_1 \equiv ma \pmod{ac}, \quad \ell_2 \equiv m \pmod c, \quad b\ell_3 \equiv mb \pmod{bc}$$</p> <p>For $m&gt;c$ then $\ell_1\neq\ell_2\neq\ell_3 \iff a,b,c$ are pairwise coprime thus the solution must follow these restrictions:</p> <p>$$\ell_1 &lt; bc &lt; \log abc$$</p> <p>$$\ell_2 &lt; c &lt; \log abc$$</p> <p>$$\ell_3 &lt; ac &lt; \log abc$$</p>
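The impossibility argument in the first answer (reading $ma\equiv 1 \pmod{abc}$ modulo $a$ gives $0\equiv 1$, forcing $a=1$, and likewise $b=1$) is easy to confirm by brute force. A small illustrative script, not part of the original thread:

```python
from math import gcd

# m*a = 1 (mod abc) read mod a gives 0 = 1 (mod a), so a must be 1;
# likewise for b.  Brute force over all residues m confirms this.
def has_m(a, b, c):
    n = a * b * c
    return any((m * a) % n == 1 and (m * b) % n == 1 for m in range(n))

assert gcd(2, 3) == gcd(3, 5) == gcd(5, 2) == 1
assert not has_m(2, 3, 5)   # a, b > 1: no such m exists
assert has_m(1, 1, 7)       # only the degenerate a = b = 1 works (m = 1)
print("ok")
```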
Convergence of $\sum_{n=1}^{\infty}{(1-2c^2)^n \over n \ln{n} + \cos{n\pi}}$
2,094,519
4
https://math.stackexchange.com/questions/2094519/convergence-of-sum-n-1-infty1-2c2n-over-n-lnn-cosn-pi
<p>Let $$\sum_{n=1}^{\infty}{(1-2c^2)^n \over n \ln{n} + \cos{n\pi}}$$ $c \in \mathbb{R}$. For what values of $c$ is this series:</p> <p>1) convergent?</p> <p>2) absolutely convergent?</p> <p>Do you have any suggestions?</p>
[ { "answer_id": 2094533, "score": 0, "text": "<p>Hint: concentrate on the numerator. Do a case division $|c|&gt;,=,&lt;\\sqrt{2}/2$. The denominator is pretty much irrelevant</p>\n" }, { "answer_id": 2099996, "score": 0, "text": "<p>Considering:\n$$ n\\log{n}+\\cos{n\\pi} = n\\log{n}+(-1)^n \\approx n\\log{n} \\quad\\{\\text{for}\\,n\\rightarrow\\infty\\} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nLet:\n$$\\color{red}{A=1-2c^2} \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nThus:\n$$\n\\sum_{n=\\color{red}{2}}^{\\infty}\\frac{A^n}{n\\log{n}} = \\begin{cases} \n-\\infty\\lt \\,A\\, \\lt-1 &amp;\\Rightarrow \\sum\\frac{(-1)^n|A|^n}{n\\log{n}} \\space\\space\\space\\quad\\rightarrow\\quad \\text{Diverge by Limit test} \\\\[2mm] \n-1\\,\\,\\lt \\,A\\, \\lt0 &amp;\\Rightarrow \\sum\\frac{(-1)^n}{(1/|A|)^n n\\log{n}} \\space\\rightarrow\\quad \\text{Converge by Ratio test} \\\\[2mm] \n\\,\\,\\,0\\,\\,\\,\\lt \\,A\\, \\lt+1 &amp;\\Rightarrow \\sum\\frac{1}{(1/|A|)^n n\\log{n}} \\space\\rightarrow\\quad \\text{Converge by Ratio test} \\\\[2mm] \n+1\\,\\,\\lt \\,A\\, \\lt+\\infty &amp;\\Rightarrow \\sum\\frac{|A|^n}{n\\log{n}} \\space\\space\\space\\qquad\\rightarrow\\quad \\text{Diverge by Limit test} \\\\[2mm] \n\\end{cases}\n$$\nAlso:\n$$\n\\sum_{n=\\color{red}{2}}^{\\infty}\\frac{A^n}{n\\log{n}} = \\begin{cases} \nA=-1 &amp;\\implies \\sum\\frac{(-1)^n}{n\\log{n}} \\quad\\rightarrow\\quad \\text{Converge by Dirichlet test} \\\\[2mm] \nA=0 &amp;\\implies \\sum\\frac{(0)^n}{n\\log{n}} = 0 \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \nA=+1 &amp;\\implies \\sum\\frac{1}{n\\log{n}} \\quad\\rightarrow\\quad \\text{Diverge by Comparison test} \\\\[2mm] \n\\end{cases}\n$$\nHence:\n$$\n\\sum_{n=1}^{\\infty}\\frac{\\left(1-2c^2\\right)^n}{n\\log{n}+\\cos{n\\pi}} = \\begin{cases} \n\\text{Convergent for} &amp;A\\in\\,\\color{blue}{[}-1,\\,+1\\color{red}{[} \\quad\\rightarrow\\quad 
c\\in\\,\\color{blue}{[}-1,\\,+1\\color{blue}{]}-\\color{red}{\\{0\\}} \\\\[2mm] \n\\text{|Convergent| for} &amp;A\\in\\,\\color{blue}{]}-1,\\,+1\\color{red}{[} \\quad\\rightarrow\\quad c\\in\\,\\color{blue}{]}-1,\\,+1\\color{blue}{[}-\\color{red}{\\{0\\}} \\\\[2mm] \n\\end{cases}\n$$\n <br/> \n <br/> \n<a href=\"https://i.stack.imgur.com/rAHwu.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rAHwu.png\" alt=\"enter image description here\"></a></p>\n" } ]
Question: Convergence of $\sum_{n=1}^{\infty}{(1-2c^2)^n \over n \ln{n} + \cos{n\pi}}$ Answer 1: <p>Hint: concentrate on the numerator. Do a case division $|c|&gt;,=,&lt;\sqrt{2}/2$. The denominator is pretty much irrelevant</p> Answer 2: <p>Considering: $$ n\log{n}+\cos{n\pi} = n\log{n}+(-1)^n \approx n\log{n} \quad\{\text{for}\,n\rightarrow\infty\} \quad\qquad\qquad\qquad\qquad\qquad $$ Let: $$\color{red}{A=1-2c^2} \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ Thus: $$ \sum_{n=\color{red}{2}}^{\infty}\frac{A^n}{n\log{n}} = \begin{cases} -\infty\lt \,A\, \lt-1 &amp;\Rightarrow \sum\frac{(-1)^n|A|^n}{n\log{n}} \space\space\space\quad\rightarrow\quad \text{Diverge by Limit test} \\[2mm] -1\,\,\lt \,A\, \lt0 &amp;\Rightarrow \sum\frac{(-1)^n}{(1/|A|)^n n\log{n}} \space\rightarrow\quad \text{Converge by Ratio test} \\[2mm] \,\,\,0\,\,\,\lt \,A\, \lt+1 &amp;\Rightarrow \sum\frac{1}{(1/|A|)^n n\log{n}} \space\rightarrow\quad \text{Converge by Ratio test} \\[2mm] +1\,\,\lt \,A\, \lt+\infty &amp;\Rightarrow \sum\frac{|A|^n}{n\log{n}} \space\space\space\qquad\rightarrow\quad \text{Diverge by Limit test} \\[2mm] \end{cases} $$ Also: $$ \sum_{n=\color{red}{2}}^{\infty}\frac{A^n}{n\log{n}} = \begin{cases} A=-1 &amp;\implies \sum\frac{(-1)^n}{n\log{n}} \quad\rightarrow\quad \text{Converge by Dirichlet test} \\[2mm] A=0 &amp;\implies \sum\frac{(0)^n}{n\log{n}} = 0 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] A=+1 &amp;\implies \sum\frac{1}{n\log{n}} \quad\rightarrow\quad \text{Diverge by Comparison test} \\[2mm] \end{cases} $$ Hence: $$ \sum_{n=1}^{\infty}\frac{\left(1-2c^2\right)^n}{n\log{n}+\cos{n\pi}} = \begin{cases} \text{Convergent for} &amp;A\in\,\color{blue}{[}-1,\,+1\color{red}{[} \quad\rightarrow\quad c\in\,\color{blue}{[}-1,\,+1\color{blue}{]}-\color{red}{\{0\}} \\[2mm] \text{|Convergent| for} &amp;A\in\,\color{blue}{]}-1,\,+1\color{red}{[} \quad\rightarrow\quad 
c\in\,\color{blue}{]}-1,\,+1\color{blue}{[}-\color{red}{\{0\}} \\[2mm] \end{cases} $$ <br/> <br/> <a href="https://i.stack.imgur.com/rAHwu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rAHwu.png" alt="enter image description here"></a></p>
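As a quick sanity check on the case analysis in the second answer, here is a small illustrative Python classifier (the helper `classify` is hypothetical, not part of either answer) keyed on $A = 1-2c^2$; it simply encodes the cases above rather than testing convergence numerically.

```python
def classify(c):
    """Classify sum (1-2c^2)^n / (n ln n + cos(n pi)) via A = 1 - 2c^2,
    following the case analysis in the answer above."""
    A = 1 - 2 * c * c  # note A <= 1 always, with A = 1 iff c = 0
    if abs(A) < 1:
        return "absolutely convergent"      # ratio test on |A|^n / (n ln n)
    if A == -1:                             # |c| = 1: alternating terms
        return "conditionally convergent"   # Dirichlet test
    return "divergent"                      # A = 1 (c = 0) or A < -1 (|c| > 1)

# matches the boxed result: convergent exactly for c in [-1, 1] minus {0}
print(classify(0.5), classify(1.0), classify(0.0), classify(1.5))
```

The float comparison `A == -1` is exact here because $|c|=1$ gives $A=-1.0$ with no rounding.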
then maximum and minimum value of $x+y$
2,094,891
4
https://math.stackexchange.com/questions/2094891/then-maximum-and-minimum-value-of-xy
<p>If $x,y\in \mathbb{R}$ and $x^3+y^3=2$, find the maximum and minimum value of $x+y$.</p> <p>Using $\displaystyle \frac{x^3+y^3}{2}\geq \left(\frac{x+y}{2}\right)^3$,</p> <p>we get $(x+y)^3\leq 2^3$, so $x+y\leq 2$.</p> <p>Could someone help me to find the minimum value? Thanks.</p>
[ { "answer_id": 2094916, "score": 5, "text": "<p>From \n$$\n 2 = x^3 + y^3 = (x+y)(x^2-xy+y^2) = (x+y) \\frac{(x-y)^2+x^2+y^2}2\n$$\nit follows that $x+y&gt; 0$. On the other hand, for arbitrary $t &gt; 0$\n$$\n x =-t \\quad , \\quad y = \\sqrt[3]{2+t^3}\n$$\nsatisfies $x^3+y^3 =2$ and\n$$\n x + y = \\sqrt[3]{2+t^3} - t = \\frac{2}{(\\sqrt[3]{2+t^3})^2 + t \\sqrt[3]{2+t^3} + t^2} \\to 0\n$$\nfor $t \\to \\infty$. Therefore the <em>infimum</em> is zero and a minimum\ndoes not exist.</p>\n\n<p><a href=\"https://i.stack.imgur.com/6pkeZm.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/6pkeZm.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2094933, "score": 4, "text": "<p>Note that\nSince\n$$\n3x^2+3y^2y'=0\n$$\nwe have\n$$\ny'=-\\frac{x^2}{y^2}\n$$\nTherefore,\n$$\n\\begin{align}\n0\n&amp;=(x+y)'\\\\[6pt]\n&amp;=1+y'\\\\\n&amp;=1-\\frac{x^2}{y^2}\\\\\n\\end{align}\n$$\nAt $x=y=1$, we get a maximum of $2$.</p>\n\n<p>$x=-y$ doesn't happen, but $\\frac xy\\to-1$ as $x\\to\\pm\\infty$. Since $xy\\le\\frac{x^2+y^2}2$, we have $x^2-xy+y^2\\ge\\frac{x^2+y^2}2$. Therefore,\n$$\n\\begin{align}\nx+y\n&amp;=\\frac{x^3+y^3}{x^2-xy+y^2}\\\\\n&amp;\\le\\frac4{x^2+y^2}\\\\[6pt]\n&amp;\\to0\n\\end{align}\n$$</p>\n" }, { "answer_id": 2094915, "score": 3, "text": "<p>If $x,y$ are real numbers then there is no absolute minimum. In fact the value of $x+y$ can be made as close to zero as we like. We have $y = \\sqrt[3]{2 - x^3}$</p>\n\n<p>Obviously for very large $x$ we have that $y \\approx -x$. Additionally the value must be positive, as $y &gt; -x$, from the condition.</p>\n\n<p>On the other side if we add the condition that $x,y$ are positive, then we have:</p>\n\n<p>$$(x+y)^3 \\ge x^3 + y^3 = 2 \\implies x+y \\ge \\sqrt[3]{2}$$</p>\n\n<p>And the minimum is obtained for $(x,y) = \\{(\\sqrt[3]{2},0), (0,\\sqrt[3]{2})\\}$</p>\n" } ]
Question: then maximum and minimum value of $x+y$ Answer 1: <p>From $$ 2 = x^3 + y^3 = (x+y)(x^2-xy+y^2) = (x+y) \frac{(x-y)^2+x^2+y^2}2 $$ it follows that $x+y&gt; 0$. On the other hand, for arbitrary $t &gt; 0$ $$ x =-t \quad , \quad y = \sqrt[3]{2+t^3} $$ satisfies $x^3+y^3 =2$ and $$ x + y = \sqrt[3]{2+t^3} - t = \frac{2}{(\sqrt[3]{2+t^3})^2 + t \sqrt[3]{2+t^3} + t^2} \to 0 $$ for $t \to \infty$. Therefore the <em>infimum</em> is zero and a minimum does not exist.</p> <p><a href="https://i.stack.imgur.com/6pkeZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6pkeZm.png" alt="enter image description here"></a></p> Answer 2: <p>Since $$ 3x^2+3y^2y'=0 $$ we have $$ y'=-\frac{x^2}{y^2} $$ Therefore, $$ \begin{align} 0 &amp;=(x+y)'\\[6pt] &amp;=1+y'\\ &amp;=1-\frac{x^2}{y^2}\\ \end{align} $$ At $x=y=1$, we get a maximum of $2$.</p> <p>$x=-y$ doesn't happen, but $\frac xy\to-1$ as $x\to\pm\infty$. Since $xy\le\frac{x^2+y^2}2$, we have $x^2-xy+y^2\ge\frac{x^2+y^2}2$. Therefore, $$ \begin{align} x+y &amp;=\frac{x^3+y^3}{x^2-xy+y^2}\\ &amp;\le\frac4{x^2+y^2}\\[6pt] &amp;\to0 \end{align} $$</p> Answer 3: <p>If $x,y$ are real numbers then there is no absolute minimum. In fact the value of $x+y$ can be made as close to zero as we like. We have $y = \sqrt[3]{2 - x^3}$</p> <p>Obviously for very large $x$ we have that $y \approx -x$. Additionally the value must be positive, as $y &gt; -x$, from the condition.</p> <p>On the other side if we add the condition that $x,y$ are positive, then we have:</p> <p>$$(x+y)^3 \ge x^3 + y^3 = 2 \implies x+y \ge \sqrt[3]{2}$$</p> <p>And the minimum is obtained for $(x,y) = \{(\sqrt[3]{2},0), (0,\sqrt[3]{2})\}$</p>
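The infimum argument in the answers can be illustrated numerically. This short sketch (the helper `s` is just for illustration) follows the parametrization $x=-t$, $y=\sqrt[3]{2+t^3}$ used in the first answer, under which $x+y$ stays positive but tends to $0$.

```python
# On the curve x^3 + y^3 = 2, x + y attains its maximum 2 at x = y = 1,
# while x = -t, y = (2 + t^3)^(1/3) drives x + y toward 0 as t grows
# (infimum 0, no minimum attained)
def s(t):
    x = -t
    y = (2 + t ** 3) ** (1 / 3.0)
    return x + y

vals = [s(t) for t in (1.0, 10.0, 100.0, 1000.0)]
print(vals)  # strictly decreasing toward 0, always positive
assert all(a > b > 0 for a, b in zip(vals, vals[1:]))
```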
What is the meaning of &quot;Ring structure varying continuously with $x$?&quot;
2,095,222
4
https://math.stackexchange.com/questions/2095222/what-is-the-meaning-of-ring-structure-varying-continuously-with-x
<p>What does it mean when one states that a ring structure varies continuously with some variable? Say, in the following context</p> <blockquote> <p>The stalks may be rings, and in that case the ring structure of $\mathscr{F}_x$ is assumed to vary continuously with $x$. Or, each $\mathscr{F}_x$ may be a module over some fixed field.</p> </blockquote> <p>taken from Zariski's <em><a href="http://projecteuclid.org/euclid.bams/1183520530" rel="nofollow noreferrer">Scientific report on the second summer institute, several complex variables. Part III. Algebraic sheaf theory</a>.</em></p>
[ { "answer_id": 2095235, "score": 4, "text": "<p>Suppose you have a vector bundle on a manifold, in such a way that each fiber is a ring. Consider a local trivialization. In terms of it, you can describe the multiplication of each of those rings in terms of structure constants: that the structures of rings varies continuously means that those structure constants are continuous functions.</p>\n\n<p>In other categories, like the holomorphic one or the algebraic one you adapt as needed.</p>\n" }, { "answer_id": 2095392, "score": 3, "text": "<p>A sheaf of sets on a space $X$ is a topological space $\\mathscr{F}$ together with a continuous map $p:\\mathscr{F}\\to X$ satisfying certain conditions (this is not the only way to define a sheaf, but it is the one Zariski is using). If you put a ring structure on $\\mathscr{F}_x=p^{-1}(\\{x\\})$ for each $x$, then to say this ring structure \"varies continuously\" just means that the ring operations are continuous maps (and similarly for any other sort of algebraic structure). For instance, addition is a map $+:\\{(a,b)\\in\\mathscr{F}\\times\\mathscr{F}:p(a)=p(b)\\}\\to\\mathscr{F}$, and this map should be continuous (with respect to the product topology on the domain). Similarly, multiplication should be continuous, negation should be continuous as a map $\\mathscr{F}\\to\\mathscr{F}$, and the maps $0,1:X\\to\\mathscr{F}$ which send $x$ to the zero element or the unit element of $\\mathscr{F}_x$ (respectively) should be continuous.</p>\n\n<p>(The more common modern approach is to define a sheaf not in terms of the space $\\mathscr{F}$ but rather in terms of the operation which takes an open set $U\\subseteq X$ to the set $\\mathscr{F}(U)$ of sections of the map $p$ over the set $U$, together with restriction maps $\\mathscr{F}(U)\\to\\mathscr{F}(V)$ whenever $V\\subseteq U$. 
In this framework, it is automatic that any algebraic structure on the sets $\\mathscr{F}(U)$ \"varies continuously\", in the sense that putting (for instance) a ring structure on each $\\mathscr{F}(U)$ such that the restriction maps are homomorphisms is equivalent to putting a ring structure on each stalk $\\mathscr{F}_x$ such that the ring operations are continuous as described above.)</p>\n" } ]
Question: What is the meaning of &quot;Ring structure varying continuously with $x$?&quot; Answer 1: <p>Suppose you have a vector bundle on a manifold, in such a way that each fiber is a ring. Consider a local trivialization. In terms of it, you can describe the multiplication of each of those rings in terms of structure constants: that the structures of rings varies continuously means that those structure constants are continuous functions.</p> <p>In other categories, like the holomorphic one or the algebraic one you adapt as needed.</p> Answer 2: <p>A sheaf of sets on a space $X$ is a topological space $\mathscr{F}$ together with a continuous map $p:\mathscr{F}\to X$ satisfying certain conditions (this is not the only way to define a sheaf, but it is the one Zariski is using). If you put a ring structure on $\mathscr{F}_x=p^{-1}(\{x\})$ for each $x$, then to say this ring structure "varies continuously" just means that the ring operations are continuous maps (and similarly for any other sort of algebraic structure). For instance, addition is a map $+:\{(a,b)\in\mathscr{F}\times\mathscr{F}:p(a)=p(b)\}\to\mathscr{F}$, and this map should be continuous (with respect to the product topology on the domain). Similarly, multiplication should be continuous, negation should be continuous as a map $\mathscr{F}\to\mathscr{F}$, and the maps $0,1:X\to\mathscr{F}$ which send $x$ to the zero element or the unit element of $\mathscr{F}_x$ (respectively) should be continuous.</p> <p>(The more common modern approach is to define a sheaf not in terms of the space $\mathscr{F}$ but rather in terms of the operation which takes an open set $U\subseteq X$ to the set $\mathscr{F}(U)$ of sections of the map $p$ over the set $U$, together with restriction maps $\mathscr{F}(U)\to\mathscr{F}(V)$ whenever $V\subseteq U$. 
In this framework, it is automatic that any algebraic structure on the sets $\mathscr{F}(U)$ "varies continuously", in the sense that putting (for instance) a ring structure on each $\mathscr{F}(U)$ such that the restriction maps are homomorphisms is equivalent to putting a ring structure on each stalk $\mathscr{F}_x$ such that the ring operations are continuous as described above.)</p>
finding polynomials
2,095,339
4
https://math.stackexchange.com/questions/2095339/finding-polynomials
<p>How to find all complex polynomials $p(z)$ and $q(z)$ such that $$p(z) \cos^2 (z) + q(z) \sin^2(z) =1$$ (where $z$ is a complex number)? Clearly, $p(z)=q(z)=1$ is a solution. I think it is the only solution but I cannot prove that.</p>
[ { "answer_id": 2095347, "score": 5, "text": "<p>At all points $z = k \\pi $, $k \\in \\Bbb Z$, $p(z) = 1$ must hold,\ni.e. the polynomial $p(z) - 1$ has infinitely many zeros. It follows\nthat $p(z) \\equiv 1$.</p>\n\n<p>$q(z) \\equiv 1$ follows similarly.</p>\n" }, { "answer_id": 2095348, "score": 4, "text": "<p><strong>Hint</strong>: For each integer $n$, we have $$p(\\pi n)\\cos^2(\\pi n) + q(\\pi n)\\sin^2(\\pi n) = 1$$ $$p(\\pi n) = 1$$</p>\n\n<p>Hence, $z = \\ldots,-2\\pi,-\\pi,0,\\pi,2\\pi,\\ldots$ are all roots of $p(z)-1$. </p>\n" }, { "answer_id": 2105819, "score": 2, "text": "<p>Given,\np(z) * (cos z)^2 + q(z) * (sin z)^2 =1 .........(1)</p>\n\n<p>=>\n(e^ 2iz + e^ --2iz) * (p(z) -- q(z)) + 2 * (p(z) + q(z)) =4</p>\n\n<p>=>Cos (2z) *( p(z) -- q(z)) + (p(z) + q(z)) =2 </p>\n\n<p>Since,(1) is true for all z€C .\nSo for z€ C such that Cos (2z)=1 yields ,\np(z)=1 ..........(2)\nAnd ,Since,(1) is true for all z€C .\nSo for z€ C such that Cos (2z)=0 yields,\np(z)+ q(z)=1 .\nSo from (2) ,it yields q(z)=1.\nThus p(z) =1 and q(z)=1 are the only polynomials.</p>\n" } ]
Question: finding polynomials Answer 1: <p>At all points $z = k \pi $, $k \in \Bbb Z$, $p(z) = 1$ must hold, i.e. the polynomial $p(z) - 1$ has infinitely many zeros. It follows that $p(z) \equiv 1$.</p> <p>$q(z) \equiv 1$ follows similarly.</p> Answer 2: <p><strong>Hint</strong>: For each integer $n$, we have $$p(\pi n)\cos^2(\pi n) + q(\pi n)\sin^2(\pi n) = 1$$ $$p(\pi n) = 1$$</p> <p>Hence, $z = \ldots,-2\pi,-\pi,0,\pi,2\pi,\ldots$ are all roots of $p(z)-1$. </p> Answer 3: <p>Given $$p(z)\cos^2 z + q(z)\sin^2 z = 1 \tag{1}$$</p> <p>Using $\cos^2 z = \frac{1+\cos 2z}{2}$ and $\sin^2 z = \frac{1-\cos 2z}{2}$, this becomes $$\cos(2z)\,\big(p(z) - q(z)\big) + \big(p(z) + q(z)\big) = 2$$</p> <p>Since $(1)$ holds for all $z \in \Bbb C$, evaluating at the infinitely many $z$ with $\cos(2z)=1$ yields $p(z)=1$ there, so $p(z) \equiv 1$. Evaluating at the infinitely many $z$ with $\cos(2z)=0$ yields $p(z)+q(z)=2$ there, and hence $q(z) \equiv 1$. Thus $p(z)=1$ and $q(z)=1$ are the only such polynomials.</p>
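A small numeric spot-check of the identity behind the answers (the helper `residual` is illustrative, not part of any answer): $\cos^2 z + \sin^2 z = 1$ holds for all complex $z$, so $p=q=1$ always works, while at $z=k\pi$ the identity forces $p(k\pi)=1$, so a non-constant candidate such as $p(z)=z$ already fails there.

```python
import cmath

def residual(p, q, z):
    """|p(z) cos^2 z + q(z) sin^2 z - 1| for candidate polynomials p, q."""
    return abs(p(z) * cmath.cos(z) ** 2 + q(z) * cmath.sin(z) ** 2 - 1)

one = lambda z: 1
for z in [0, 1 + 2j, -3.5j, 2.7 - 0.4j]:
    assert residual(one, one, z) < 1e-9  # p = q = 1 works everywhere

# at z = pi, sin z = 0 and cos^2 z = 1, so the identity reads p(pi) = 1;
# the candidate p(z) = z leaves a residual of about pi - 1 there
print(residual(lambda z: z, one, cmath.pi))
```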
club sets being invariant under increasing continuous function
2,095,556
4
https://math.stackexchange.com/questions/2095556/club-sets-being-invariant-under-increasing-continuous-function
<p>Can someone explain (prove) to me why this holds? </p> <p>To be precise, by a club set I mean a closed and unbounded set, where the definitions are:</p> <p>Let κ ∈ Card and C ⊆ κ. C is unbounded in κ if ∀α &lt; κ ∃β ∈ C (α &lt; β).</p> <p>C is closed in κ if ∀λ &lt; κ (λ is a limit ordinal ∧ C ∩ λ is unbounded in λ → λ ∈ C).</p> <p>More precisely, I am supposed to prove that the following holds:</p> <p>Suppose that µ, κ are regular cardinals and f∶µ → κ is a cofinal strictly increasing continuous function. I have to show that f[C] is club in κ for every club C in µ.</p>
[ { "answer_id": 2095560, "score": 4, "text": "<p><strong>Hint</strong>: A cofinal continuous and strictly increasing function $\\mu\\to \\kappa$ extends to a continuous function $\\mu+1\\to \\kappa+1$ (because it implies that $\\lim_{\\alpha \\to \\mu} f(\\alpha)=\\kappa$), which are compact Hausdorff spaces. Being closed in $\\kappa$ or $\\mu$ is the same as being closed in the order topology. This helps with the \"closed\" part. The \"unbounded\" part should be obvious. (In fact, I belive you don't need to know anything about $\\mu$ and $\\kappa$ here -- they can be any limit ordinals.)</p>\n" }, { "answer_id": 2095980, "score": 3, "text": "<p>HINT: I like <strong>tomasz</strong>’s argument, but a more elementary one is also possible. </p>\n\n<p>If $f[C]$ is not closed in $\\kappa$, let $\\alpha=\\min\\big((\\operatorname{cl}f[C])\\setminus C\\big)$, and let $A=C\\cap\\alpha$; clearly $\\alpha\\in\\operatorname{cl}A$. Let $B=f^{-1}[A]=\\{\\xi\\in C:f(\\xi)&lt;\\alpha\\}$, and let $\\beta=\\sup B\\in C$. There are two possibilities: either $\\beta=\\max B\\in B$, or $B\\subseteq\\beta$. Show that each possibility leads to a contradiction.</p>\n" } ]
Question: club sets being invariant under increasing continuous function Answer 1: <p><strong>Hint</strong>: A cofinal continuous and strictly increasing function $\mu\to \kappa$ extends to a continuous function $\mu+1\to \kappa+1$ (because it implies that $\lim_{\alpha \to \mu} f(\alpha)=\kappa$), which are compact Hausdorff spaces. Being closed in $\kappa$ or $\mu$ is the same as being closed in the order topology. This helps with the "closed" part. The "unbounded" part should be obvious. (In fact, I believe you don't need to know anything about $\mu$ and $\kappa$ here -- they can be any limit ordinals.)</p> Answer 2: <p>HINT: I like <strong>tomasz</strong>’s argument, but a more elementary one is also possible. </p> <p>If $f[C]$ is not closed in $\kappa$, let $\alpha=\min\big((\operatorname{cl}f[C])\setminus f[C]\big)$, and let $A=f[C]\cap\alpha$; clearly $\alpha\in\operatorname{cl}A$. Let $B=f^{-1}[A]=\{\xi\in C:f(\xi)&lt;\alpha\}$, and let $\beta=\sup B\in C$. There are two possibilities: either $\beta=\max B\in B$, or $B\subseteq\beta$. Show that each possibility leads to a contradiction.</p>
Distinguishing critical points, relative extrema, etc.
2,095,672
4
https://math.stackexchange.com/questions/2095672/distinguishing-critical-points-relative-extrema-etc
<p>I'm getting slightly confused with how critical points, inflection points, and relative extrema are all related to each other in a graph. Let me tell you what I think I know, and please correct me if I'm wrong.</p> <p>A <strong>relative extremum</strong> is a local maximum or minimum of a 'hill' of the graph of a polynomial of the second degree or higher. In other words, a relative extremum is a point where the derivative is 0. This can be found by finding where the derivative changes signs. Can this also be found just by looking where the derivative is 0? Is that the same thing? Which is easier?</p> <p>A <strong>critical point</strong>, also known as a stationary point(?), is any point where the derivative is equal to 0. This can be found using the same method as above.</p> <p>An <strong>inflection point</strong> is a point where the rate of change of the derivative of the graph switches signs. In other words, it is a point where the curvature of the graph changes from concave up to down or vice versa. This can be found by finding the second derivative and finding when its sign changes/finding its zeroes (which is easier?).</p> <p>Now my main problem is figuring out where these overlap. So these are my questions that I can't really get to the bottom of:</p> <ul> <li><p>Are <strong>all relative extrema critical points?</strong> </p></li> <li><p>Are <strong>all critical points relative extrema?</strong></p></li> <li><p>Can a critical point ever <strong>not be a relative extremum?</strong></p></li> <li><p>Will an inflection point <strong>always be distinct from a relative extremum and a critical point?</strong></p></li> </ul>
[ { "answer_id": 2095705, "score": 1, "text": "<p>To answer my question about the difference between finding the zeroes and finding where a derivative changes signs, I've found from solving a few problems that finding the roots of a derivative isn't always sufficient, particularly when dealing with odd degree polynomials with stretches that lay nearly flat against the x-axis (I don't know how to describe this better). In a case like that, the derivative might have multiple zeroes, but only the roots farthest from zero might be the relative extrema.</p>\n" }, { "answer_id": 2095678, "score": 0, "text": "<ul>\n<li>All relative extrema are critical points.</li>\n<li>However, not all critical points are relative extrema. For example plot $f(x)=x^3$ and note that $f'$ is zero at $x=0$, yet it is neither a relative maximum nor a relative minimum. In higher dimensions, saddle points are another example of critical points that are not relative extrema.</li>\n<li>Consider $f(x)=x^5$. Its second derivative is $f''(x)=20x^3$, which changes sign at $x=0$. Its first derivative is $f'(x)=5x^4$ which is zero at $x=0$, so it is also a critical point.</li>\n</ul>\n" } ]
Question: Distinguishing critical points, relative extrema, etc. Answer 1: <p>To answer my question about the difference between finding the zeroes and finding where a derivative changes signs, I've found from solving a few problems that finding the roots of a derivative isn't always sufficient, particularly when dealing with odd degree polynomials with stretches that lay nearly flat against the x-axis (I don't know how to describe this better). In a case like that, the derivative might have multiple zeroes, but only the roots farthest from zero might be the relative extrema.</p> Answer 2: <ul> <li>All relative extrema are critical points.</li> <li>However, not all critical points are relative extrema. For example plot $f(x)=x^3$ and note that $f'$ is zero at $x=0$, yet it is neither a relative maximum nor a relative minimum. In higher dimensions, saddle points are another example of critical points that are not relative extrema.</li> <li>Consider $f(x)=x^5$. Its second derivative is $f''(x)=20x^3$, which changes sign at $x=0$. Its first derivative is $f'(x)=5x^4$ which is zero at $x=0$, so it is also a critical point.</li> </ul>
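A numeric companion to the counterexample in the answers (the central-difference helper `deriv` is hypothetical, just for illustration): $f(x)=x^3$ has a critical point at $0$ that is not a relative extremum, because the derivative does not change sign there.

```python
# f(x) = x^3: critical point at 0, but not an extremum
def deriv(f, x, h=1e-6):
    """Approximate f'(x) by a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3
# the derivative is ~0 at x = 0 ...
print(deriv(f, 0.0))
# ... but has the same (positive) sign on both sides, so no extremum
print(deriv(f, -0.1) > 0, deriv(f, 0.1) > 0)
```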
Suppose $X_1, \ldots, X_n$ are iid standard Cauchy random variables, does $\frac{1}{n}\sum_{i=1}^{n}X_i$ converge in probability or almost surely?
2,095,683
4
https://math.stackexchange.com/questions/2095683/suppose-x-1-ldots-x-n-are-iid-standard-cauchy-random-variables-does-frac
<p>Suppose $X_1, \ldots, X_n$ are iid standard Cauchy random variables, does $\frac{1}{n}\sum_{i=1}^{n}X_i$ converge in probability or almost surely?</p> <p>I know that by stable law, $\frac{1}{n}\sum_{i=1}^{n}X_i \sim Cauchy(0,1)$, but how can I find if it converges in probability or not by the definition of convergence in probability? </p>
[ { "answer_id": 2095714, "score": 2, "text": "<p>Let $Z_n = \\frac{1}{n} \\sum_{i=1}^n X_i$. </p>\n\n<p>Now $$Z_{2n} - Z_{n} = \\underset{A_n}{\\underbrace{-\\frac{1}{2n} \\sum_{i=1}^n X_i}} +\\underset{B_n}{\\underbrace{\\frac{1}{2n} \\sum_{i=n+1}^{2n} X_i}}.$$</p>\n\n<p>Then $A_n$ and $B_n$ are independent identically distributed Cauchy $(0,\\frac 12)$. Hence, $A_n + B_n $ is Cauchy $(0,1)$. </p>\n\n<p>In particular, </p>\n\n<p>$$P(|Z_n -Z_{2n}|&gt;\\epsilon )=c,$$ </p>\n\n<p>where $c$ is the probability that Cauchy $(0,1)$ has absolute value larger than $\\epsilon$ (explicitly, $c= \\frac{2}{\\pi} \\arctan\\epsilon$). This is a constant independent of $n$. </p>\n\n<p>From this we conclude that the sequence $(Z_n)$ does not converge in probability and therefore also not a.s. It is Cauchy distributed, but unfortunately, it is not a Cauchy sequence is probability... </p>\n\n<p>Indeed, for any random variable $Z$ we have </p>\n\n<p>$$|Z_n - Z_{2n}|= |Z_n -Z+ Z-Z_{2n}|\\le |Z_n - Z| + |Z_{2n}-Z|.$$ </p>\n\n<p>Therefore, the event $\\{|Z_n-Z_{2n}|&gt;\\epsilon\\}$ is contained in the event $\\{|Z_n -z|+|Z_{2n}-Z|&gt;\\epsilon\\}$. But the latter event is contained in the event $\\{|Z_n-Z|&gt;\\epsilon/2\\}\\cup \\{|Z_{2n}-Z|&gt;\\epsilon/2\\}$. Or: </p>\n\n<p>$$c = P(|Z_n-Z_{2n}|&gt;\\epsilon) \\le P(|Z_n -Z|&gt;\\epsilon/2)+P(|Z_{2n}-Z|&gt;\\epsilon/2).$$ </p>\n\n<p>Since the righthand side is bounded below by $c$, it follows that $Z$ is not the limit in probability of $(Z_n)$ (if it were, both summands on RHS would go to zero as $n\\to\\infty$). </p>\n" }, { "answer_id": 2095716, "score": 2, "text": "<p>It's pretty clear that it can't converge (in either sense) to a number, since it always has the Cauchy(0,1) distribution. 
To see that it can't converge to anything, observe that $$ \\frac{S_{2n}}{2n}-\\frac{S_n}{n} = \\frac{1}{2}\\frac{S_{2n}-S_n}{n} -\\frac{1}{2}\\frac{S_n}{n}.$$ where $S_n = \\sum_{k=1}^n X_k.$ $(S_{2n}-S_{n})/n$ and $S_n/n$ are independent Cauchys, so their difference is stable and the distance between $S_{2n}/2n$ and $S_n/n$ cannot go to zero as $n$ gets large.</p>\n" } ]
Question: Suppose $X_1, \ldots, X_n$ are iid standard Cauchy random variables, does $\frac{1}{n}\sum_{i=1}^{n}X_i$ converge in probability or almost surely? Answer 1: <p>Let $Z_n = \frac{1}{n} \sum_{i=1}^n X_i$. </p> <p>Now $$Z_{2n} - Z_{n} = \underset{A_n}{\underbrace{-\frac{1}{2n} \sum_{i=1}^n X_i}} +\underset{B_n}{\underbrace{\frac{1}{2n} \sum_{i=n+1}^{2n} X_i}}.$$</p> <p>Then $A_n$ and $B_n$ are independent identically distributed Cauchy $(0,\frac 12)$. Hence, $A_n + B_n $ is Cauchy $(0,1)$. </p> <p>In particular, </p> <p>$$P(|Z_n -Z_{2n}|&gt;\epsilon )=c,$$ </p> <p>where $c$ is the probability that Cauchy $(0,1)$ has absolute value larger than $\epsilon$ (explicitly, $c= 1 - \frac{2}{\pi} \arctan\epsilon$). This is a constant independent of $n$. </p> <p>From this we conclude that the sequence $(Z_n)$ does not converge in probability and therefore also not a.s. It is Cauchy distributed, but unfortunately, it is not a Cauchy sequence in probability... </p> <p>Indeed, for any random variable $Z$ we have </p> <p>$$|Z_n - Z_{2n}|= |Z_n -Z+ Z-Z_{2n}|\le |Z_n - Z| + |Z_{2n}-Z|.$$ </p> <p>Therefore, the event $\{|Z_n-Z_{2n}|&gt;\epsilon\}$ is contained in the event $\{|Z_n -Z|+|Z_{2n}-Z|&gt;\epsilon\}$. But the latter event is contained in the event $\{|Z_n-Z|&gt;\epsilon/2\}\cup \{|Z_{2n}-Z|&gt;\epsilon/2\}$. Or: </p> <p>$$c = P(|Z_n-Z_{2n}|&gt;\epsilon) \le P(|Z_n -Z|&gt;\epsilon/2)+P(|Z_{2n}-Z|&gt;\epsilon/2).$$ </p> <p>Since the right-hand side is bounded below by $c$, it follows that $Z$ is not the limit in probability of $(Z_n)$ (if it were, both summands on the RHS would go to zero as $n\to\infty$). </p> Answer 2: <p>It's pretty clear that it can't converge (in either sense) to a number, since it always has the Cauchy(0,1) distribution. 
To see that it can't converge to anything, observe that $$ \frac{S_{2n}}{2n}-\frac{S_n}{n} = \frac{1}{2}\frac{S_{2n}-S_n}{n} -\frac{1}{2}\frac{S_n}{n}.$$ where $S_n = \sum_{k=1}^n X_k.$ $(S_{2n}-S_{n})/n$ and $S_n/n$ are independent Cauchys, so their difference is stable and the distance between $S_{2n}/2n$ and $S_n/n$ cannot go to zero as $n$ gets large.</p>
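The positive constant used in the first answer can be computed from the standard Cauchy CDF $F(x)=\tfrac12+\tfrac1\pi\arctan x$, which gives $P(|W|>\varepsilon)=1-\tfrac2\pi\arctan\varepsilon$. A small sketch checks this, with a hypothetical inverse-CDF sampler ($W=\tan(\pi(U-\tfrac12))$ for uniform $U$) as a Monte Carlo cross-check.

```python
import math
import random

def c(eps):
    """P(|W| > eps) for a standard Cauchy W, from the CDF."""
    return 1 - (2 / math.pi) * math.atan(eps)

print(c(1.0))  # 1/2: a standard Cauchy exceeds 1 in absolute value
               # half the time

# Monte Carlo cross-check via inverse-CDF sampling (fixed seed)
rng = random.Random(0)
n = 200_000
hits = sum(abs(math.tan(math.pi * (rng.random() - 0.5))) > 1.0
           for _ in range(n))
print(hits / n)  # close to 0.5
```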
Number of cubes
2,096,006
4
https://math.stackexchange.com/questions/2096006/number-of-cubes
<p>We have $X$ cubes with $8000\le X\le10000$. We have built columns with $2×2$ bases, leaving 2 cubes. We have also built columns with $3×3$ and $5×5$ bases, leaving 4 cubes in these cases.</p> <blockquote> <p>How can we calculate the number of cubes?</p> </blockquote> <p>I have created the equations $$n\equiv2\bmod4$$ $$n\equiv4\bmod9$$ $$n\equiv4\bmod25$$ but I am not sure how to proceed in calculating the right number. What is the best way to calculate it? Thanks for help.</p>
[ { "answer_id": 2096034, "score": 1, "text": "<p>Since 4, 9 and 25 are relatively prime, the Chinese remainder theorem guarantees that the number of blocks $X$ is unique modulo $4\\cdot9\\cdot25=900$.</p>\n\n<p>$X\\equiv4\\bmod9$ and $X\\equiv4\\bmod25$ imply $X\\equiv4\\bmod225$ (the product of 9 and 25). Add 225 repeatedly to 4 until arriving at a number that is $2\\bmod 4$:\n$$4+225+225=454\\equiv2\\bmod4$$\nThis is $X\\bmod900$; now add 900 repeatedly to 454 until arriving at a number between 8000 and 10000. The possible values for $X$ are thus\n$$454+900\\cdot9=8554$$\n$$454+900\\cdot10=9454$$</p>\n" }, { "answer_id": 2096032, "score": 0, "text": "<p>The Chinese Remainder is used for calculating a number $n$ in this case that when divided by $4$ has remainder $2$, by $9$ remainder $4$ and by $25$ remainder $4$.</p>\n\n<p>Then $lcm(4,9,25) = 900$ and we need $a+b+c \\equiv r \\pmod{900}$</p>\n\n<p>Let's start calculating $a=2\\cdot9\\cdot25\\cdot t$ where $t_1 \\equiv (9\\cdot25)^{-1} \\pmod 4$. 
</p>\n\n<p>Then $t_1 \\equiv 225^1 \\equiv 1 \\pmod 4$ and $a=450$</p>\n\n<p>For $b=4\\cdot4\\cdot25\\cdot t_2$ where $t_2 \\equiv (4\\cdot25)^{-1} \\pmod 9$.</p>\n\n<p>Then $t_2 \\equiv 1 \\equiv 100^5 \\pmod 9$ and $b=400$</p>\n\n<p>For $c=4\\cdot4\\cdot9\\cdot t_3$ where $t_3 \\equiv (4\\cdot9)^{-1} \\pmod{25}$</p>\n\n<p>Then $t_3 \\equiv 16 \\equiv 36^{19} \\pmod{25}$ and $c=2304$</p>\n\n<p>Thus $454 \\equiv a+b+c \\equiv 3154 \\pmod{900}$ so $n=454$ but we aren't done yet.</p>\n\n<p>Since you want $n$ to satisfy $8000&lt;n&lt;10000$, then $n=8554$ or $n=9454$</p>\n" }, { "answer_id": 2096053, "score": 0, "text": "<p>$n \\equiv 4 \\pmod 9\\\\n \\equiv 4 \\pmod {25}$</p>\n\n<p>So<br>\n$n \\equiv 4 \\pmod {225}$</p>\n\n<p>Possible numbers are $8104,\n8329,\n8554,\n8779,\n9004,\n9229,\n9454,\n9679,\n9904$</p>\n\n<p>Last two digits of above numbers are <br>$04,\n29,\n54,\n79,\n04,\n29,\n54,\n79,\n04\n$</p>\n\n<p>Last two digits of the above numbers plus $2$ are <br>$06,31,56,81,06,31,56,81,06$</p>\n\n<p>Only $56$ is divisible by $4$</p>\n\n<p>Corresponding numbers are $8554$ and $9454$<br>\nthese are the required numbers</p>\n" } ]
Question: Number of cubes Answer 1: <p>Since 4, 9 and 25 are relatively prime, the Chinese remainder theorem guarantees that the number of blocks $X$ is unique modulo $4\cdot9\cdot25=900$.</p> <p>$X\equiv4\bmod9$ and $X\equiv4\bmod25$ imply $X\equiv4\bmod225$ (the product of 9 and 25). Add 225 repeatedly to 4 until arriving at a number that is $2\bmod 4$: $$4+225+225=454\equiv2\bmod4$$ This is $X\bmod900$; now add 900 repeatedly to 454 until arriving at a number between 8000 and 10000. The possible values for $X$ are thus $$454+900\cdot9=8554$$ $$454+900\cdot10=9454$$</p> Answer 2: <p>The Chinese Remainder Theorem is used for calculating a number $n$ in this case that when divided by $4$ has remainder $2$, by $9$ remainder $4$ and by $25$ remainder $4$.</p> <p>Then $lcm(4,9,25) = 900$ and we need $a+b+c \equiv r \pmod{900}$</p> <p>Let's start calculating $a=2\cdot9\cdot25\cdot t_1$ where $t_1 \equiv (9\cdot25)^{-1} \pmod 4$. </p> <p>Then $t_1 \equiv 225^1 \equiv 1 \pmod 4$ and $a=450$</p> <p>For $b=4\cdot4\cdot25\cdot t_2$ where $t_2 \equiv (4\cdot25)^{-1} \pmod 9$.</p> <p>Then $t_2 \equiv 1 \equiv 100^5 \pmod 9$ and $b=400$</p> <p>For $c=4\cdot4\cdot9\cdot t_3$ where $t_3 \equiv (4\cdot9)^{-1} \pmod{25}$</p> <p>Then $t_3 \equiv 16 \equiv 36^{19} \pmod{25}$ and $c=2304$</p> <p>Thus $454 \equiv a+b+c \equiv 3154 \pmod{900}$ so $n=454$ but we aren't done yet.</p> <p>Since you want $n$ to satisfy $8000&lt;n&lt;10000$, then $n=8554$ or $n=9454$</p> Answer 3: <p>$n \equiv 4 \pmod 9\\n \equiv 4 \pmod {25}$</p> <p>So<br> $n \equiv 4 \pmod {225}$</p> <p>Possible numbers are $8104, 8329, 8554, 8779, 9004, 9229, 9454, 9679, 9904$</p> <p>Last two digits of above numbers are <br>$04, 29, 54, 79, 04, 29, 54, 79, 04 $</p> <p>Last two digits of the above numbers plus $2$ are <br>$06,31,56,81,06,31,56,81,06$</p> <p>Only $56$ is divisible by $4$</p> <p>Corresponding numbers are $8554$ and $9454$<br> these are the required numbers</p>
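All of the answers above can be cross-checked by brute force over the stated range; a minimal sketch:

```python
# brute-force the congruences n = 2 (mod 4), n = 4 (mod 9), n = 4 (mod 25)
# over the stated range 8000..10000
solutions = [n for n in range(8000, 10001)
             if n % 4 == 2 and n % 9 == 4 and n % 25 == 4]
print(solutions)  # [8554, 9454]

# the residue mod 900 = lcm(4, 9, 25) predicted by the CRT computation
print(solutions[0] % 900)  # 454
```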
Uniform convergence not a topological property?
2,096,052
4
https://math.stackexchange.com/questions/2096052/uniform-convergence-not-a-topological-property
<p>Suppose I have a sequence of functions $f_n: X \to Y$ between metric spaces that converges uniformly to $f$. Then if I change the metric on $Y$ to a Lipschitz equivalent metric, the sequence of functions will still converge uniformly to $f$.</p> <p>However, I am told that if I change the metric on $Y$ in such a way that does not disturb the induced topology, then $f_n$ may no longer converge uniformly. My question is: <strong>is there a simple example of a sequence of functions from $X$ to two homeomorphic metric spaces $Y$, $Y'$ converging uniformly in one but not the other?</strong></p> <p>EDIT: question modified slightly in response to comments.</p>
[ { "answer_id": 2096098, "score": 3, "text": "<p>$Y$ must not be compact for this to be possible (because compact spaces have unique uniform structure).</p>\n\n<p>You can simply construct an artificial example. For instance, let $X={\\bf N}$ and $Y={\\bf N}\\times [0,1]$, while $Y'$ is $Y$ with metric on $\\{n\\}\\times [0,1]$ scaled by $1/n$ (so that the diameter is $1/n$, you can put the distance between components in any way you want, for example put $2$ whenever two points are in different components). Clearly, $Y'$ is homeomorphic to $Y$ (even by a locally bi-Lipschitz function), but if you take sequence of functions $(f_m)$ with $$f_m(n)=\\begin{cases}(n,1) &amp;\\textrm{if }m&gt;n\\\\ (n,0) &amp;\\textrm{otherwise}\\end{cases}$$ you will have $f_m$ converge uniformly as sequence of functions into $Y'$, but not so into $Y$. You can replace $[0,1]$ here with any nontrivial metric space.</p>\n\n<p>For a different example, take for $Y$ the interval $(0,1]$ with the standard metric, and for $Y'$ the same interval with the metric scaled by a singular function, e.g. for $y_1&lt;y_2$ put $d(y_1,y_2)=y_1^{-1}-y_2^{-1}$, and put\n$$f_m(n)=\\begin{cases}\\frac{1}{n} &amp;\\textrm{if }m&gt;n\\\\ \\frac{1}{n+1} &amp;\\textrm{otherwise}\\end{cases}$$\nThis sequence will be uniformly convergent in the standard metric, but not in the scaled metric.</p>\n" }, { "answer_id": 2096681, "score": 1, "text": "<p>Consider $f_n(x)=x+\\frac1n$, from $X=(0,1)$ to $Y=\\Bbb R$, with the usual metrics. This converges uniformly to $f(x)=x$.</p>\n\n<p>Now, change the metric of the codomain to $d'(x,y)=\\frac1x-\\frac1y$. That is, let $Y'$ be $(\\Bbb R,d')$. Then $f_n$, as a function from $X$ to $Y'$ no longer converges uniformly to $f$.</p>\n" } ]
Question: Uniform convergence not a topological property? Answer 1: <p>$Y$ must not be compact for this to be possible (because compact spaces have unique uniform structure).</p> <p>You can simply construct an artificial example. For instance, let $X={\bf N}$ and $Y={\bf N}\times [0,1]$, while $Y'$ is $Y$ with metric on $\{n\}\times [0,1]$ scaled by $1/n$ (so that the diameter is $1/n$, you can put the distance between components in any way you want, for example put $2$ whenever two points are in different components). Clearly, $Y'$ is homeomorphic to $Y$ (even by a locally bi-Lipschitz function), but if you take sequence of functions $(f_m)$ with $$f_m(n)=\begin{cases}(n,1) &amp;\textrm{if }m&gt;n\\ (n,0) &amp;\textrm{otherwise}\end{cases}$$ you will have $f_m$ converge uniformly as sequence of functions into $Y'$, but not so into $Y$. You can replace $[0,1]$ here with any nontrivial metric space.</p> <p>For a different example, take for $Y$ the interval $(0,1]$ with the standard metric, and for $Y'$ the same interval with the metric scaled by a singular function, e.g. for $y_1&lt;y_2$ put $d(y_1,y_2)=y_1^{-1}-y_2^{-1}$, and put $$f_m(n)=\begin{cases}\frac{1}{n} &amp;\textrm{if }m&gt;n\\ \frac{1}{n+1} &amp;\textrm{otherwise}\end{cases}$$ This sequence will be uniformly convergent in the standard metric, but not in the scaled metric.</p> Answer 2: <p>Consider $f_n(x)=x+\frac1n$, from $X=(0,1)$ to $Y=\Bbb R$, with the usual metrics. This converges uniformly to $f(x)=x$.</p> <p>Now, change the metric of the codomain to $d'(x,y)=\frac1x-\frac1y$. That is, let $Y'$ be $(\Bbb R,d')$. Then $f_n$, as a function from $X$ to $Y'$ no longer converges uniformly to $f$.</p>
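The second answer in the record above can be probed numerically. The sketch below is not from the original thread; the sample points are arbitrary, and $d'$ is taken with an absolute value and restricted to positive reals (an assumption made here so it is a genuine metric). It approximates the sup-distance between $f_n(x)=x+\frac1n$ and $f(x)=x$ on $(0,1)$: in the standard metric it is $\frac1n$, while in $d'(u,v)=|1/u-1/v|$ it blows up near $0$.

```python
def d_std(u, v):
    # standard metric on the reals
    return abs(u - v)

def d_prime(u, v):
    # remetrization of the positive reals via t -> 1/t (absolute value added
    # here for symmetry); topologically equivalent to d_std on (0, inf)
    return abs(1.0 / u - 1.0 / v)

def sup_dist(n, metric, samples):
    # approximate sup over the samples of metric(f_n(x), f(x)),
    # where f_n(x) = x + 1/n and f(x) = x
    return max(metric(x + 1.0 / n, x) for x in samples)

samples = [10.0 ** (-k) for k in range(1, 8)]  # points of (0,1) tending to 0

sup_std = sup_dist(100, d_std, samples)      # = 1/100 at every sample point
sup_prime = sup_dist(100, d_prime, samples)  # grows without bound as x -> 0+
```

For $n=100$ the standard sup-distance stays at $0.01$, while the $d'$ sup-distance over these samples already exceeds $10^6$, illustrating why uniform convergence is lost after remetrization.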
Why are Stiefel-Whitney classes not represented in de Rham cohomology?
2,096,233
4
https://math.stackexchange.com/questions/2096233/why-are-stiefel-whitney-classes-not-represented-in-de-rham-cohomology
<p>Stiefel-Whitney classes of a real vector bundle $(E, \pi, M)$ are defined axiomatically in singular cohomology $H^*(M, \mathbb{Z}/2\mathbb{Z})$ (compare <a href="https://en.wikipedia.org/wiki/Stiefel%E2%80%93Whitney_class#Axiomatic_definition" rel="nofollow noreferrer">wiki</a>). </p> <p>Now various books state, that Stiefel-Whitney classes are not represented in de Rham cohomology (e.g. "The Stiefel Whitney classes are not de Rham cohomology classes" compare <a href="https://books.google.de/books?id=VVciDQAAQBAJ&amp;pg=PA74&amp;lpg=PA74&amp;dq=stiefel%20whitney%20de%20rham%20cohomology&amp;source=bl&amp;ots=ia8-vPx0LM&amp;sig=90NBy2s-EMS9QjYrmilmuC98dQU&amp;hl=de&amp;sa=X&amp;ved=0ahUKEwiM05nArL_RAhXhLcAKHZfVDXsQ6AEISTAG#v=onepage&amp;q=stiefel%20whitney%20de%20rham%20cohomology&amp;f=false" rel="nofollow noreferrer">click</a>). </p> <p>Why is this the case? Naivly I would think, that the composition of the coefficient homomorphism with the de Rham iso $H^*(M, \mathbb{Z}/2\mathbb{Z}) \rightarrow H^*(M, \mathbb{R}) \rightarrow H_{dR}(M)$ should map Stiefel-Whitney classes on classes of de Rham cohomology satisfying the axioms of Stiefel-Whitney classes. Why does this fail? Can you give a reference?</p>
[ { "answer_id": 2096242, "score": 6, "text": "<p>There is no (nonzero) map $f : H^*(M; \\mathbb{Z}/2\\mathbb{Z}) \\to H^*(M; \\mathbb{R})$ as you claimed. The field $\\mathbb{Z}/2\\mathbb{Z}$ has characteristic $2$, but the field $\\mathbb{R}$ has characteristic $0$. In $H^*(M; \\mathbb{Z}/2\\mathbb{Z})$, all the elements $\\alpha$ satisfy $2 \\alpha = 0$. So under your map $f$ you get $2 f(\\alpha) = 0$, but over $\\mathbb{R}$ you can divide by $2$ and you get $f(\\alpha) = 0$...</p>\n\n<p>More abstractly, the universal Stiefel-Whitney class $w_n$ lives in $H^n(\\operatorname{Gr}_n(\\mathbb{R}^\\infty); \\mathbb{Z}/2\\mathbb{Z})$, the $n$th cohomology group of the Grassmanian of $n$-subspaces in $\\mathbb{R}^\\infty$. All the other Stiefel-Whitney classes $w_n(\\xi)$ are pullbacks of this one. So if you wanted \"de Rham\" Stiefel-Whitney classes, you'd be looking at $H^n(\\operatorname{Gr}_n(\\mathbb{R}^\\infty); \\mathbb{R})$. But for example $\\operatorname{Gr}_1(\\mathbb{R}^\\infty)$ is the infinite-dimensional real projective space $\\mathbb{RP}^\\infty$, and $H^1(\\mathbb{RP}^\\infty; \\mathbb{R}) = 0$.</p>\n" }, { "answer_id": 2096244, "score": 2, "text": "<p>Let's look at $w_1$: it takes 1-cycles to numbers $0$ or $1$. What, in $H^1(M, \\Bbb R)$ would be the cocycle $W$ corresponding to that? In particular, for surfaces, $w_1$ measures non-orientability: if $c$ is a curve that reverses orientation, then $w_1(c) = 1$. So for $RP^2$, $w_1$ is nonzero, but what will $W$ be? Since $H^1(RP^2, \\Bbb R) = 0$, $W$ pretty much can't be anything interesting. </p>\n\n<p>@Najib's answer makes this simple example more general. </p>\n" } ]
Question: Why are Stiefel-Whitney classes not represented in de Rham cohomology? Answer 1: <p>There is no (nonzero) map $f : H^*(M; \mathbb{Z}/2\mathbb{Z}) \to H^*(M; \mathbb{R})$ as you claimed. The field $\mathbb{Z}/2\mathbb{Z}$ has characteristic $2$, but the field $\mathbb{R}$ has characteristic $0$. In $H^*(M; \mathbb{Z}/2\mathbb{Z})$, all the elements $\alpha$ satisfy $2 \alpha = 0$. So under your map $f$ you get $2 f(\alpha) = 0$, but over $\mathbb{R}$ you can divide by $2$ and you get $f(\alpha) = 0$...</p> <p>More abstractly, the universal Stiefel-Whitney class $w_n$ lives in $H^n(\operatorname{Gr}_n(\mathbb{R}^\infty); \mathbb{Z}/2\mathbb{Z})$, the $n$th cohomology group of the Grassmanian of $n$-subspaces in $\mathbb{R}^\infty$. All the other Stiefel-Whitney classes $w_n(\xi)$ are pullbacks of this one. So if you wanted "de Rham" Stiefel-Whitney classes, you'd be looking at $H^n(\operatorname{Gr}_n(\mathbb{R}^\infty); \mathbb{R})$. But for example $\operatorname{Gr}_1(\mathbb{R}^\infty)$ is the infinite-dimensional real projective space $\mathbb{RP}^\infty$, and $H^1(\mathbb{RP}^\infty; \mathbb{R}) = 0$.</p> Answer 2: <p>Let's look at $w_1$: it takes 1-cycles to numbers $0$ or $1$. What, in $H^1(M, \Bbb R)$ would be the cocycle $W$ corresponding to that? In particular, for surfaces, $w_1$ measures non-orientability: if $c$ is a curve that reverses orientation, then $w_1(c) = 1$. So for $RP^2$, $w_1$ is nonzero, but what will $W$ be? Since $H^1(RP^2, \Bbb R) = 0$, $W$ pretty much can't be anything interesting. </p> <p>@Najib's answer makes this simple example more general. </p>
The sum of series $1-1/2-1/4+1/3-1/6-1/8+1/5-....$?
2,096,604
4
https://math.stackexchange.com/questions/2096604/the-sum-of-series-1-1-2-1-41-3-1-6-1-81-5
<p>I know that the sum of the series $1-1/2+1/3-1/4+1/5+\ldots = \log 2$. And we can see that, by rearranging the terms of the series given in the question, I would land on the series above, so the answer should seemingly be $\log 2$; but no, the answer is $\frac{\log 2}{2}$.</p> <p>Also notice that the series in the question has a positive term after every two negative terms. Does this have anything to do with the answer?</p> <p>This question is different from the series $1-1/2+1/3-1/4+\ldots$, as the sum of that series is $\log 2$, but the sum of the series asked about is $\frac{\log 2}{2}$.</p>
[ { "answer_id": 2096622, "score": 5, "text": "<p>Ok, looks like you are adding the odd terms and subtracting the even terms of the harmonic series in the order odd, even, even, odd, even, even. </p>\n\n<p>We can write each trio of terms as $$ \\frac{1}{n}-\\frac{1}{2n}-\\frac{1}{2n+2}$$</p>\n\n<p>so reexpress the series as $$ \\sum_{\\mbox{n odd}} \\frac{1}{n}-\\frac{1}{2n}-\\frac{1}{2n+2}.$$ The first two terms can be combined to get$$ \\sum_{\\mbox{n odd}} \\frac{1}{2n}-\\frac{1}{2n+2}= \\frac{1}{2}\\sum_{\\mbox{n odd}}\\frac{1}{n}-\\frac{1}{n+1} = \\frac{1}{2}\\left(\\frac{1}{1} -\\frac{1}{2}+\\frac{1}{3}-\\frac{1}{4}\\ldots\\right)$$\nAs you mentioned in your question, the thing inside the parentheses is $\\log(2),$ so the final answer is $\\frac{1}{2}\\log(2).$</p>\n" }, { "answer_id": 2096621, "score": 3, "text": "<p>If you take the terms 3 at a time, the formula for the $n$th term appears to be:</p>\n\n<p>$$\\frac{1}{2n-1} - \\frac{1}{4n-2}- \\frac{1}{4n}.$$</p>\n\n<p>If so we have the above </p>\n\n<p>$$= \\frac{1}{4n(2n-1)} = \\frac{1}{2} \\frac{1}{2n(2n-1)}= \\frac{1}{2}\\left(\\frac{1}{2n-1}-\\frac{1}{2n}\\right).$$</p>\n\n<p>And this last is $\\frac{1}{2}$ of two consecutive terms of the alternating harmonic series. This makes your sum equal to $\\frac{1}{2} \\ln 2.$</p>\n" } ]
Question: The sum of series $1-1/2-1/4+1/3-1/6-1/8+1/5-....$? Answer 1: <p>Ok, looks like you are adding the odd terms and subtracting the even terms of the harmonic series in the order odd, even, even, odd, even, even. </p> <p>We can write each trio of terms as $$ \frac{1}{n}-\frac{1}{2n}-\frac{1}{2n+2}$$</p> <p>so reexpress the series as $$ \sum_{\mbox{n odd}} \frac{1}{n}-\frac{1}{2n}-\frac{1}{2n+2}.$$ The first two terms can be combined to get$$ \sum_{\mbox{n odd}} \frac{1}{2n}-\frac{1}{2n+2}= \frac{1}{2}\sum_{\mbox{n odd}}\frac{1}{n}-\frac{1}{n+1} = \frac{1}{2}\left(\frac{1}{1} -\frac{1}{2}+\frac{1}{3}-\frac{1}{4}\ldots\right)$$ As you mentioned in your question, the thing inside the parentheses is $\log(2),$ so the final answer is $\frac{1}{2}\log(2).$</p> Answer 2: <p>If you take the terms 3 at a time, the formula for the $n$th term appears to be:</p> <p>$$\frac{1}{2n-1} - \frac{1}{4n-2}- \frac{1}{4n}.$$</p> <p>If so we have the above </p> <p>$$= \frac{1}{4n(2n-1)} = \frac{1}{2} \frac{1}{2n(2n-1)}= \frac{1}{2}\left(\frac{1}{2n-1}-\frac{1}{2n}\right).$$</p> <p>And this last is $\frac{1}{2}$ of two consecutive terms of the alternating harmonic series. This makes your sum equal to $\frac{1}{2} \ln 2.$</p>
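A quick numerical sanity check of the answers above (a sketch added here, not part of the original thread): summing the series in trios $\frac{1}{2n-1}-\frac{1}{4n-2}-\frac{1}{4n}$, as both answers do, the partial sums approach $\frac12\log 2\approx 0.3466$ rather than $\log 2$.

```python
import math

def partial_sum(trios):
    # partial sums of 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ... grouped in threes;
    # the n-th trio is 1/(2n-1) - 1/(4n-2) - 1/(4n), which simplifies to
    # 1/(4n(2n-1)) ~ 1/(8n^2), so the grouped series converges absolutely
    return sum(1 / (2 * n - 1) - 1 / (4 * n - 2) - 1 / (4 * n)
               for n in range(1, trios + 1))

approx = partial_sum(200_000)
target = math.log(2) / 2
```

The tail after $N$ trios is below $\frac{1}{8N}$, so $200{,}000$ trios pin the sum down to a few times $10^{-7}$.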
Is this a trick question? Balls into urns probability
2,096,757
4
https://math.stackexchange.com/questions/2096757/is-this-a-trick-question-balls-into-urns-probability
<p>There are two urns. Urn 1 contains 3 white and 2 red ball, urn 2 one white and two red. First, a ball from urn 1 is randomly chosen and placed into urn 2. Finally, a ball from urn 2 is picked. This ball be red: What is the probability that the ball transferred from urn 1 to urn 2 was white?</p> <p>My answer is 3/5 - 3W over 5 balls in urn 1 as the second event does not tell me anything to influence which ball was transferred since urn 2 already has a red ball anyways. Hope to know if my reasoning is valid!</p>
[ { "answer_id": 2096764, "score": 8, "text": "<p>Method I: (Bayes) There are two scenarios in which a red ball is observed:</p>\n\n<p>I: A white ball is transferred (probability $\\frac 35$). In this case, a red ball is observed with probability $\\frac 12$. Thus this scenario has probability $$\\frac 35 \\times \\frac 12 = \\frac 3{10}$$</p>\n\n<p>II: A red ball is transferred (probability $\\frac 25$). Now the probability of drawing a red ball is $\\frac 34$. Thus this scenario has probability $$\\frac 25\\times \\frac 34 =\\frac 3{10}$$</p>\n\n<p>We see that the two scenarios contribute equally, thus the probability that it was a white ball that was transferred initially is $\\boxed {\\frac 12}$</p>\n\n<p>Note: as our prior was that the probability the transferred was white was $\\frac 35$ we see that the observation of the red ball has caused us to lower our estimate for the probability</p>\n\n<p>Method II (Conditional Probability). A priori, the universe here consists of four possible events: $(W,W),(W,R),(R,R),(R,W)$ according to the color of the transferred ball and the color of the drawn ball. A routine calculation shows that $(W,W),(W,R),(R,R)$ each have probability $\\frac 3{10}$ and $(R,W)$ has probability $\\frac 1{10}$. We are asked for the probability that the transferred ball is $W$ conditioned on the fact that the drawn ball is $R$ and inspection now shows the answer to be $\\frac 12$. </p>\n" }, { "answer_id": 2097149, "score": 3, "text": "<p>Confirming lulu's answer.</p>\n\n<p>You begin with ($3$white and $2$red) balls in urn#1, then select $1$ at random(without bias) and move it into urn#2. &nbsp; Urn#2 then contains either ($2$white and $2$red) or ($1$white and $3$red) depending on which colour of ball was moved into it (white or red respectively).
&nbsp; Then a ball is extracted(without bias) from urn#2 and it turns out to be red.</p>\n\n<p>Let $W_1$ be the event that a white ball was moved into urn#2, and $R_2$ the event that a red ball was extracted from urn#2. &nbsp; We wish to determine $\\mathsf P(W_1\\mid R_2)$, the conditional probability that a white ball had been transferred into urn#2 when given that a red ball was subsequently extracted.</p>\n\n<p>We know $~\\mathsf P(W_1)=3/5~,~ \\mathsf P(R_2\\mid W_1)=2/4~,~ \\\\\\mathsf P(W_1^\\complement)=2/5~,~ \\mathsf P(R_2\\mid W_1^\\complement)=3/4~$.</p>\n\n<p>Then we apply Bayes' Rule:$$\\begin{align}\\mathsf P(W_1\\mid R_2) ~&amp;=~\\dfrac{\\mathsf P(W_1)\\,\\mathsf P(R_2\\mid W_1)}{~\\mathsf P(W_1)\\,\\mathsf P(R_2\\mid W_1)+\\mathsf P(W_1^\\complement)\\,\\mathsf P(R_2\\mid W_1^\\complement)~}\\\\[1ex]&amp;=~\\dfrac{3\\cdot 2}{~3\\cdot 2+2\\cdot 3~}\\\\[1ex]&amp;=~\\dfrac{~1~}{2}\\end{align}$$</p>\n\n<p>$\\blacksquare$</p>\n" }, { "answer_id": 4264048, "score": 0, "text": "<p><a href=\"https://i.stack.imgur.com/fMhz1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fMhz1.png\" alt=\"enter image description here\" /></a></p>\n<p>A Pretty Straightforward question. Please go through the diagram. Another way to think is if 1W transferred to U2= (1W + 2R) makes U2 = (2R + 2W). Now, # White Balls = # Red Balls . Then P(W)= P(R) = 1/2 for Urn2.</p>\n" } ]
Question: Is this a trick question? Balls into urns probability Answer 1: <p>Method I: (Bayes) There are two scenarios in which a red ball is observed:</p> <p>I: A white ball is transferred (probability $\frac 35$). In this case, a red ball is observed with probability $\frac 12$. Thus this scenario has probability $$\frac 35 \times \frac 12 = \frac 3{10}$$</p> <p>II: A red ball is transferred (probability $\frac 25$). Now the probability of drawing a red ball is $\frac 34$. Thus this scenario has probability $$\frac 25\times \frac 34 =\frac 3{10}$$</p> <p>We see that the two scenarios contribute equally, thus the probability that it was a white ball that was transferred initially is $\boxed {\frac 12}$</p> <p>Note: as our prior was that the probability the transferred was white was $\frac 35$ we see that the observation of the red ball has caused us to lower our estimate for the probability</p> <p>Method II (Conditional Probability). A priori, the universe here consists of four possible events: $(W,W),(W,R),(R,R),(R,W)$ according to the color of the transferred ball and the color of the drawn ball. A routine calculation shows that $(W,W),(W,R),(R,R)$ each have probability $\frac 3{10}$ and $(R,W)$ has probability $\frac 1{10}$. We are asked for the probability that the transferred ball is $W$ conditioned on the fact that the drawn ball is $R$ and inspection now shows the answer to be $\frac 12$. </p> Answer 2: <p>Confirming lulu's answer.</p> <p>You begin with ($3$white and $2$red) balls in urn#1, then select $1$ at random(without bias) and move it into urn#2. &nbsp; Urn#2 then contains either ($2$white and $2$red) or ($1$white and $3$red) depending on which colour of ball was moved into it (white or red respectively). &nbsp; Then a ball is extracted(without bias) from urn#2 and it turns out to be red.</p> <p>Let $W_1$ be the event that a white ball was moved into urn#2, and $R_2$ the event that a red ball was extracted from urn#2.
&nbsp; We wish to determine $\mathsf P(W_1\mid R_2)$, the conditional probability that a white ball had been transferred into urn#2 when given that a red ball was subsequently extracted.</p> <p>We know $~\mathsf P(W_1)=3/5~,~ \mathsf P(R_2\mid W_1)=2/4~,~ \\\mathsf P(W_1^\complement)=2/5~,~ \mathsf P(R_2\mid W_1^\complement)=3/4~$.</p> <p>Then we apply Bayes' Rule:$$\begin{align}\mathsf P(W_1\mid R_2) ~&amp;=~\dfrac{\mathsf P(W_1)\,\mathsf P(R_2\mid W_1)}{~\mathsf P(W_1)\,\mathsf P(R_2\mid W_1)+\mathsf P(W_1^\complement)\,\mathsf P(R_2\mid W_1^\complement)~}\\[1ex]&amp;=~\dfrac{3\cdot 2}{~3\cdot 2+2\cdot 3~}\\[1ex]&amp;=~\dfrac{~1~}{2}\end{align}$$</p> <p>$\blacksquare$</p> Answer 3: <p><a href="https://i.stack.imgur.com/fMhz1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fMhz1.png" alt="enter image description here" /></a></p> <p>A Pretty Straightforward question. Please go through the diagram. Another way to think is if 1W transferred to U2= (1W + 2R) makes U2 = (2R + 2W). Now, # White Balls = # Red Balls . Then P(W)= P(R) = 1/2 for Urn2.</p>
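The Bayes computation above is easy to corroborate with a small Monte Carlo simulation (a sketch added here; the trial count and seed are arbitrary choices). Conditioning on a red draw from urn 2, the transferred ball comes out white about half the time.

```python
import random

def simulate(trials, seed=0):
    # estimate P(transferred ball was white | ball drawn from urn 2 is red)
    rng = random.Random(seed)
    red_draws = white_given_red = 0
    for _ in range(trials):
        moved = rng.choice(["W", "W", "W", "R", "R"])  # urn 1: 3 white, 2 red
        urn2 = ["W", "R", "R", moved]                  # urn 2 after transfer
        if rng.choice(urn2) == "R":
            red_draws += 1
            white_given_red += moved == "W"
    return white_given_red / red_draws

estimate = simulate(200_000)

# exact Bayes value: (3/5 * 1/2) / (3/5 * 1/2 + 2/5 * 3/4) = 1/2
exact = (3 / 5 * 1 / 2) / (3 / 5 * 1 / 2 + 2 / 5 * 3 / 4)
```

With 200,000 trials the estimate lands within about a standard error of $0.0015$ of $\frac12$.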
could someone please explain this proof to me? $d(x^2)=2xdx+dxdx$
2,096,850
4
https://math.stackexchange.com/questions/2096850/could-someone-please-explain-this-proof-to-me-dx2-2xdxdxdx
<p>I know that $d(xy)=ydx+xdy$ and $d(x^2)=2xdx+dxdx$.<br> I want to calculate $d(x^3)$ and $d(x^4)$.<br> Can anyone explain how I can calculate these? And what is the name of this rule?<br> Regards</p>
[ { "answer_id": 2096886, "score": 2, "text": "<p>What you have originally stated--$d(x^2)=2xdx + dxdx$--is not correct. It should be that $d(x^2)=2xdx$.</p>\n\n<p>Formally, though, if you want to calculate it it's quite simple. It's just defined by</p>\n\n<p>$$d(f(x_1,x_2,\\cdots, x_n)) = \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i} dx_i.$$</p>\n" }, { "answer_id": 2096866, "score": 0, "text": "<p>I don't know how you know \n$$\nd(xy) = ydx + xdy\n$$\nbut I think I can explain what you want to know.</p>\n\n<p>The identity you quote comes from thinking about a rectangle with width $x$ and length $y$ that you make a little bit bigger by changing $x$ by a tiny $dx$ and $y$ by a tiny $dy$. Then the area of the new rectangle is\n$$\n(x+dx)(y+dy) = xy + xdy + ydx + dxdy .\n$$\nThat means the <em>change</em> in area is\n$$\nd(xy) = xdy + ydx + dxdy .\n$$\nSince $dxdy$ is the product of two tiny numbers it's very tiny, so we can ignore it. That gives you what you say you know. If you think of a square, for which $x=y$ then you get the formula in your title <em>before</em> you ignore the $(dx)^2$.</p>\n\n<p>To get the results you want, expand $(x+dx)^3$ and then $(x+dx)^4$ using the binomial theorem (or just plain high school algebra). See which terms in the change are really tiny.</p>\n\n<p>Note: none of this is rigorous formal calculus, but it is how calculus began, and it's a good way to train your intuition.</p>\n" }, { "answer_id": 2096870, "score": 0, "text": "<p>Apply the rules you know recursively. I am writing these out in an extremely pedantic way:</p>\n\n<p>$d(x^4) = d(x x^3) = x^3dx + xdx^3 = x^3dx + xd(x x^2) = x^3dx + xx^2dx + x^2dx^2 = 2x^3dx + x^3dx + x^3dx = 4x^3dx$</p>\n\n<p>In general,</p>\n\n<p>$dx^n = nx^{n - 1}dx$ which you can show by mathematical induction.
We already know the rule holds for $n = 1$, so suppose it holds for $n = m$, we will show that it holds also for $n = m + 1$:</p>\n\n<p>$dx^{m + 1} = dxx^m = xdx^m + x^mdx = mx^mdx + x^mdx = (m + 1)x^mdx$</p>\n\n<p>And hence it must hold for any finite $n$. That's it.</p>\n" }, { "answer_id": 2096945, "score": 0, "text": "<p><strong>A visual intuition</strong></p>\n\n<p>Remember that with $d(f(x))$ we mean how much $f(x)$ increases, if $x$ grows by a tiny amount $dx$. So, with $d(x^2)$ we mean how much the area of a square of side $x$ increases, if $x$ grows by a tiny amount $dx$.</p>\n\n<p>Picture a square of side $x$. Then \"extend\" its sides upwards and rightwards by $dx$ to $x+dx$. Now you have a slightly larger square, formed by:</p>\n\n<ol>\n<li>the initial square</li>\n<li>two new long and thin rectangles $x \\times dx$ (one vertical to the right of the original square, one horizontal above it), and</li>\n<li>a new tiny little $dx \\times dx$ square in the upright corner. </li>\n</ol>\n\n<p>Those two rectangles and the little square are how much your $x^2$ increased as $x$ grew by the tiny amount $dx$:</p>\n\n<p>$d(x^2)=2(x~dx)+(dx~dx)$.</p>\n\n<p>What happens if you take a cube instead of a square?</p>\n" }, { "answer_id": 2097048, "score": 0, "text": "<p>Note that: to find $d(xy)$ \n$$\\Delta(xy)=(x+\\Delta x)(y+\\Delta y)-xy=x\\Delta y+y\\Delta x+\\Delta x\\Delta y+xy-xy=\\\\x\\Delta y+y\\Delta x+\\Delta x\\Delta y \\\\$$so when $\\Delta y,\\Delta x \\to 0$ we have \n$$\\large d(xy)=xdy+ydx+dxdy\\\\ x=y\\\\ \\to d(xx)=d(x^2)=xdx+xdx+dx.dx=2xdx+(dx)^2 $$\nnow for the case of 1st method $$d(x^3)$$\n$$=d(xx^2)=xd(x^2)+x^2dx+dx.d(x^2)=\\\\x(2x.dx+(dx)^2)+x^2.dx+dx.(2x.dx+(dx)^2)=\\\\2x^2.dx+x.(dx)^2+x^2.dx+2x(dx)^2+(dx)^3=\\\\3x^2.dx+3x(dx)^2+(dx)^3$$\nsecond method\n$$\\Delta(x^3)=(x+\\Delta x)^3-x^3=\\\\3x^2.\\Delta x+3x.(\\Delta x)^2+(\\Delta x)^3\\\\\\Delta x \\to 0 \\\\d(x^3)=3x^2.dx+3x(dx)^2+(dx)^3$$\nthen you can do it for $d(x^4)$ with two ideas
\n$$d(x^4)=d(x^2.x^2) \\space or \\space d(x.x^3)$$ or this way\n$$\\Delta(x^4)=(x+\\Delta x)^4-x^4=\\\\4x^3.\\Delta x+6x^2.(\\Delta x)^2+4x.(\\Delta x)^3+(\\Delta x)^4$$</p>\n" } ]
Question: could someone please explain this proof to me? $d(x^2)=2xdx+dxdx$ Answer 1: <p>What you have originally stated--$d(x^2)=2xdx + dxdx$--is not correct. It should be that $d(x^2)=2xdx$.</p> <p>Formally, though, if you want to calculate it it's quite simple. It's just defined by</p> <p>$$d(f(x_1,x_2,\cdots, x_n)) = \sum_{i=1}^n \frac{\partial f}{\partial x_i} dx_i.$$</p> Answer 2: <p>I don't know how you know $$ d(xy) = ydx + xdy $$ but I think I can explain what you want to know.</p> <p>The identity you quote comes from thinking about a rectangle with width $x$ and length $y$ that you make a little bit bigger by changing $x$ by a tiny $dx$ and $y$ by a tiny $dy$. Then the area of the new rectangle is $$ (x+dx)(y+dy) = xy + xdy + ydx + dxdy . $$ That means the <em>change</em> in area is $$ d(xy) = xdy + ydx + dxdy . $$ Since $dxdy$ is the product of two tiny numbers it's very tiny, so we can ignore it. That gives you what you say you know. If you think of a square, for which $x=y$ then you get the formula in your title <em>before</em> you ignore the $(dx)^2$.</p> <p>To get the results you want, expand $(x+dx)^3$ and then $(x+dx)^4$ using the binomial theorem (or just plain high school algebra). See which terms in the change are really tiny.</p> <p>Note: none of this is rigorous formal calculus, but it is how calculus began, and it's a good way to train your intuition.</p> Answer 3: <p>Apply the rules you know recursively. I am writing these out in an extremely pedantic way:</p> <p>$d(x^4) = d(x x^3) = x^3dx + xdx^3 = x^3dx + xd(x x^2) = x^3dx + xx^2dx + x^2dx^2 = 2x^3dx + x^3dx + x^3dx = 4x^3dx$</p> <p>In general,</p> <p>$dx^n = nx^{n - 1}dx$ which you can show by mathematical induction. We already know the rule holds for $n = 1$, so suppose it holds for $n = m$, we will show that it holds also for $n = m + 1$:</p> <p>$dx^{m + 1} = dxx^m = xdx^m + x^mdx = mx^mdx + x^mdx = (m + 1)x^mdx$</p> <p>And hence it must hold for any finite $n$.
That's it.</p> Answer 4: <p><strong>A visual intuition</strong></p> <p>Remember that with $d(f(x))$ we mean how much $f(x)$ increases, if $x$ grows by a tiny amount $dx$. So, with $d(x^2)$ we mean how much the area of a square of side $x$ increases, if $x$ grows by a tiny amount $dx$.</p> <p>Picture a square of side $x$. Then "extend" its sides upwards and rightwards by $dx$ to $x+dx$. Now you have a slightly larger square, formed by:</p> <ol> <li>the initial square</li> <li>two new long and thin rectangles $x \times dx$ (one vertical to the right of the original square, one horizontal above it), and</li> <li>a new tiny little $dx \times dx$ square in the upright corner. </li> </ol> <p>Those two rectangles and the little square are how much your $x^2$ increased as $x$ grew by the tiny amount $dx$:</p> <p>$d(x^2)=2(x~dx)+(dx~dx)$.</p> <p>What happens if you take a cube instead of a square?</p> Answer 5: <p>Note that: to find $d(xy)$ $$\Delta(xy)=(x+\Delta x)(y+\Delta y)-xy=x\Delta y+y\Delta x+\Delta x\Delta y+xy-xy=\\x\Delta y+y\Delta x+\Delta x\Delta y \\$$so when $\Delta y,\Delta x \to 0$ we have $$\large d(xy)=xdy+ydx+dxdy\\ x=y\\ \to d(xx)=d(x^2)=xdx+xdx+dx.dx=2xdx+(dx)^2 $$ now for the case of 1st method $$d(x^3)$$ $$=d(xx^2)=xd(x^2)+x^2dx+dx.d(x^2)=\\x(2x.dx+(dx)^2)+x^2.dx+dx.(2x.dx+(dx)^2)=\\2x^2.dx+x.(dx)^2+x^2.dx+2x(dx)^2+(dx)^3=\\3x^2.dx+3x(dx)^2+(dx)^3$$ second method $$\Delta(x^3)=(x+\Delta x)^3-x^3=\\3x^2.\Delta x+3x.(\Delta x)^2+(\Delta x)^3\\\Delta x \to 0 \\d(x^3)=3x^2.dx+3x(dx)^2+(dx)^3$$ then you can do it for $d(x^4)$ with two ideas $$d(x^4)=d(x^2.x^2) \space or \space d(x.x^3)$$ or this way $$\Delta(x^4)=(x+\Delta x)^4-x^4=\\4x^3.\Delta x+6x^2.(\Delta x)^2+4x.(\Delta x)^3+(\Delta x)^4$$</p>
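The expansions discussed in the record above can be checked numerically (a sketch added here; the sample point and step size are arbitrary choices): the increment $(x+h)^n - x^n$ splits exactly into the binomial terms, and dividing by $h$ shows the leading term $nx^{n-1}h$ dominating as $h$ shrinks.

```python
from math import comb

def increment_terms(x, h, n):
    # the terms of (x + h)**n - x**n from the binomial theorem:
    # C(n,1) x^(n-1) h + C(n,2) x^(n-2) h^2 + ... + h^n
    return [comb(n, k) * x ** (n - k) * h ** k for k in range(1, n + 1)]

x, h = 2.0, 1e-4
terms3 = increment_terms(x, h, 3)  # [3x^2 h, 3x h^2, h^3], as in the answers
terms4 = increment_terms(x, h, 4)  # [4x^3 h, 6x^2 h^2, 4x h^3, h^4]

increment3 = (x + h) ** 3 - x ** 3
```

Only the first term survives after dividing by $h$ and letting $h \to 0$, which is the rule $d(x^n) = nx^{n-1}dx$.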
Find $\int_{0}^{\infty }\cos \left (x \right )\sin \left (x^{2} \right )\mathrm{d}x$
2,096,976
4
https://math.stackexchange.com/questions/2096976/find-int-0-infty-cos-left-x-right-sin-left-x2-right-mathrm
<p>How can one prove $$\int_{0}^{\infty }\cos \left (x \right )\sin \left (x^{2} \right )dx=\frac{1}{2}\sqrt{\frac{\pi }{2}}\left ( \cos\frac{1}{4}-\sin\frac{1}{4} \right )$$ Any hint? Thanks!</p>
[ { "answer_id": 2096981, "score": 8, "text": "<p>\\begin{align*}\n\\int_{0}^{\\infty }\\cos x\\sin x^2\\, \\mathrm{d}x&amp;=\\frac{1}{4}\\int_{-\\infty}^{\\infty }\\left [ \\sin\\left ( x^{2}+x \\right )+\\sin\\left ( x^{2}-x \\right ) \\right ]\\mathrm{d}x\\tag1\\\\\n&amp;=\\frac{1}{4}\\int_{-\\infty}^{\\infty }\\left [ \\sin\\left ( \\left ( x+\\frac{1}{2} \\right )^{2}-\\frac{1}{4} \\right )+\\sin\\left ( \\left ( x-\\frac{1}{2} \\right )^{2}-\\frac{1}{4} \\right ) \\right ]\\mathrm{d}x\\\\\n&amp;=\\frac{1}{2}\\int_{-\\infty}^{\\infty }\\sin\\left ( x^{2}-\\frac{1}{4} \\right )\\mathrm{d}x\\\\\n&amp;=\\frac{1}{2}\\cos\\frac{1}{4}\\int_{-\\infty}^{\\infty }\\sin x^{2}\\, \\mathrm{d}x-\\frac{1}{2}\\sin\\frac{1}{4}\\int_{-\\infty}^{\\infty }\\cos x^2\\, \\mathrm{d}x\\tag2\n\\end{align*}\nThen use the <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">Fresnel integral</a>, and we will get the answer as wanted.</p>\n\n<p>$(1)$ : $\\sin\\alpha \\cos\\beta =\\dfrac{1}{2}\\left [ \\sin\\left ( \\alpha +\\beta \\right )+\\sin\\left ( \\alpha -\\beta \\right ) \\right ]$</p>\n\n<p>$(2)$ : $\\sin\\left ( \\alpha -\\beta \\right )=\\sin\\alpha \\cos\\beta -\\cos\\alpha \\sin\\beta $</p>\n\n<hr>\n\n<p>In the same way, we can get \n$$\\int_{0}^{\\infty }\\cos x\\cos x^2\\, \\mathrm{d}x=\\frac{1}{2}\\sqrt{\\frac{\\pi }{2}}\\left ( \\cos\\frac{1}{4}+\\sin\\frac{1}{4} \\right )$$</p>\n" }, { "answer_id": 2097010, "score": 7, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present a way forward that exploits Euler's formula.
To this end, we proceed.</strong></p>\n</blockquote>\n\n<p>We begin by exploiting the even symmetry of the integrand and write</p>\n\n<p>$$\\begin{align}\n\\int_0^\\infty \\cos(x)\\sin(x^2)\\,dx&amp;=\\frac12\\int_{-\\infty}^\\infty \\cos(x)\\sin(x^2)\\,dx\\\\\\\\\n&amp;=\\frac12\\text{Im}\\left(\\int_{-\\infty}^\\infty \\cos(x)e^{ix^2}\\,dx\\right)\\\\\\\\\n&amp;=\\frac14\\text{Im}\\left(\\int_{-\\infty}^\\infty (e^{ix}+e^{-ix})\\,e^{ix^2}\\,dx\\right)\\\\\\\\\n&amp;=\\frac14\\text{Im}\\left(\\int_{-\\infty}^\\infty (e^{i(x^2+x)}+e^{i(x^2-x)})\\,dx\\right)\\\\\\\\\n&amp;=\\frac14\\text{Im}\\left(2e^{-i/4}\\int_{-\\infty}^\\infty e^{ix^2}\\,dx\\right)\\\\\\\\\n&amp;=\\frac14\\text{Im}\\left(2e^{-i/4}\\sqrt{\\frac{\\pi}{2}}(1+i)\\right)\\\\\\\\\n&amp;=\\frac12\\sqrt{\\frac{\\pi}{2}}\\left(\\cos(1/4)-\\sin(1/4)\\right)\n\\end{align}$$</p>\n" } ]
Question: Find $\int_{0}^{\infty }\cos \left (x \right )\sin \left (x^{2} \right )\mathrm{d}x$ Answer 1: <p>\begin{align*} \int_{0}^{\infty }\cos x\sin x^2\, \mathrm{d}x&amp;=\frac{1}{4}\int_{-\infty}^{\infty }\left [ \sin\left ( x^{2}+x \right )+\sin\left ( x^{2}-x \right ) \right ]\mathrm{d}x\tag1\\ &amp;=\frac{1}{4}\int_{-\infty}^{\infty }\left [ \sin\left ( \left ( x+\frac{1}{2} \right )^{2}-\frac{1}{4} \right )+\sin\left ( \left ( x-\frac{1}{2} \right )^{2}-\frac{1}{4} \right ) \right ]\mathrm{d}x\\ &amp;=\frac{1}{2}\int_{-\infty}^{\infty }\sin\left ( x^{2}-\frac{1}{4} \right )\mathrm{d}x\\ &amp;=\frac{1}{2}\cos\frac{1}{4}\int_{-\infty}^{\infty }\sin x^{2}\, \mathrm{d}x-\frac{1}{2}\sin\frac{1}{4}\int_{-\infty}^{\infty }\cos x^2\, \mathrm{d}x\tag2 \end{align*} Then use the <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">Fresnel integral</a>, and we will get the answer as wanted.</p> <p>$(1)$ : $\sin\alpha \cos\beta =\dfrac{1}{2}\left [ \sin\left ( \alpha +\beta \right )+\sin\left ( \alpha -\beta \right ) \right ]$</p> <p>$(2)$ : $\sin\left ( \alpha -\beta \right )=\sin\alpha \cos\beta -\cos\alpha \sin\beta $</p> <hr> <p>In the same way, we can get $$\int_{0}^{\infty }\cos x\cos x^2\, \mathrm{d}x=\frac{1}{2}\sqrt{\frac{\pi }{2}}\left ( \cos\frac{1}{4}+\sin\frac{1}{4} \right )$$</p> Answer 2: <blockquote> <p><strong>I thought it might be instructive to present a way forward that exploits Euler's formula.
To this end, we proceed.</strong></p> </blockquote> <p>We begin by exploiting the even symmetry of the integrand and write</p> <p>$$\begin{align} \int_0^\infty \cos(x)\sin(x^2)\,dx&amp;=\frac12\int_{-\infty}^\infty \cos(x)\sin(x^2)\,dx\\\\ &amp;=\frac12\text{Im}\left(\int_{-\infty}^\infty \cos(x)e^{ix^2}\,dx\right)\\\\ &amp;=\frac14\text{Im}\left(\int_{-\infty}^\infty (e^{ix}+e^{-ix})\,e^{ix^2}\,dx\right)\\\\ &amp;=\frac14\text{Im}\left(\int_{-\infty}^\infty (e^{i(x^2+x)}+e^{i(x^2-x)})\,dx\right)\\\\ &amp;=\frac14\text{Im}\left(2e^{-i/4}\int_{-\infty}^\infty e^{ix^2}\,dx\right)\\\\ &amp;=\frac14\text{Im}\left(2e^{-i/4}\sqrt{\frac{\pi}{2}}(1+i)\right)\\\\ &amp;=\frac12\sqrt{\frac{\pi}{2}}\left(\cos(1/4)-\sin(1/4)\right) \end{align}$$</p>
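The final algebraic steps of the complex-exponential route can be sanity-checked with ordinary floating-point arithmetic (a sketch added here, taking the Fresnel value $\int_{-\infty}^{\infty} e^{ix^2}\,dx = \sqrt{\pi}\,e^{i\pi/4}$ as given): the quantity $\frac14\,\mathrm{Im}\big(2e^{-i/4}\int_{-\infty}^{\infty} e^{ix^2}\,dx\big)$ agrees with the closed form $\frac12\sqrt{\pi/2}\,(\cos\frac14-\sin\frac14)$ stated in the question.

```python
import cmath
import math

# Fresnel / complex-Gaussian value of the integral of e^{i x^2} over the line
gauss = math.sqrt(math.pi) * cmath.exp(1j * math.pi / 4)

# the same number written as sqrt(pi/2) * (1 + i), as in the derivation
gauss_alt = math.sqrt(math.pi / 2) * (1 + 1j)

# value of the original integral: (1/4) * Im(2 e^{-i/4} * gauss)
value = 0.25 * (2 * cmath.exp(-0.25j) * gauss).imag

# closed form stated in the question
closed_form = 0.5 * math.sqrt(math.pi / 2) * (math.cos(0.25) - math.sin(0.25))
```

Both expressions evaluate to approximately $0.4521$.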
Limit $\lim\limits_{a \to 0}( a\lfloor\frac{x}{a}\rfloor)$
2,097,007
4
https://math.stackexchange.com/questions/2097007/limit-lim-limits-a-to-0-a-lfloor-fracxa-rfloor
<p>Obviously the limit either does not exist or converges to <span class="math-container">$x$</span>.<br> I'm partial towards the latter, and have an incomplete argument involving Fourier Series which corroborates my inclination. Feel free to disprove me/affirm my hunch. <span class="math-container">$$\lim\limits_{a \to 0}( a\lfloor\frac{x}{a}\rfloor)$$</span></p>
[ { "answer_id": 2097036, "score": 2, "text": "<p>We know that $\\lfloor x/a \\rfloor \\le x/a &lt; \\lfloor x/a \\rfloor +1$, so\n$$\na &gt; 0 \\Rightarrow a\\lfloor x/a \\rfloor \\le x &lt; a\\lfloor x/a \\rfloor +a \\\\\na &lt; 0 \\Rightarrow a\\lfloor x/a \\rfloor +a &lt; x \\le a\\lfloor x/a \\rfloor\n$$\nwhich means that \n$$\na &gt; 0 \\Rightarrow 0 \\le x - a\\lfloor x/a \\rfloor &lt; a \\\\\na &lt; 0 \\Rightarrow a &lt; x - a\\lfloor x/a \\rfloor \\le 0,\n$$\nand so\n$$\n| x - a\\lfloor x/a \\rfloor | \\le |a|.\n$$\nThus $a \\lfloor x/a \\rfloor \\to x$ as $a \\to 0$ by the squeeze lemma.</p>\n" }, { "answer_id": 2097034, "score": 1, "text": "<p>Overthought it:<br>\nFor $a\\gt0$:<br>\n$\\frac{x}{a}-1 \\lt \\lfloor\\frac{x}{a}\\rfloor \\lt \\frac{x}{a}+1$<br>\n$a\\frac{x}{a}-a \\lt a\\lfloor\\frac{x}{a}\\rfloor \\lt a\\frac{x}{a}+a$<br>\nThe convergence follows from the squeeze theorem.<br>\n$a\\lt0$ is similar. </p>\n" } ]
Question: Limit $\lim\limits_{a \to 0}( a\lfloor\frac{x}{a}\rfloor)$ Answer 1: <p>We know that $\lfloor x/a \rfloor \le x/a &lt; \lfloor x/a \rfloor +1$, so $$ a &gt; 0 \Rightarrow a\lfloor x/a \rfloor \le x &lt; a\lfloor x/a \rfloor +a \\ a &lt; 0 \Rightarrow a\lfloor x/a \rfloor +a &lt; x \le a\lfloor x/a \rfloor $$ which means that $$ a &gt; 0 \Rightarrow 0 \le x - a\lfloor x/a \rfloor &lt; a \\ a &lt; 0 \Rightarrow a &lt; x - a\lfloor x/a \rfloor \le 0, $$ and so $$ | x - a\lfloor x/a \rfloor | \le |a|. $$ Thus $a \lfloor x/a \rfloor \to x$ as $a \to 0$ by the squeeze lemma.</p> Answer 2: <p>Overthought it:<br> For $a\gt0$:<br> $\frac{x}{a}-1 \lt \lfloor\frac{x}{a}\rfloor \lt \frac{x}{a}+1$<br> $a\frac{x}{a}-a \lt a\lfloor\frac{x}{a}\rfloor \lt a\frac{x}{a}+a$<br> The convergence follows from the squeeze theorem.<br> $a\lt0$ is similar. </p>
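The key inequality $|a\lfloor x/a\rfloor - x| \le |a|$ from the first answer above is easy to probe numerically (a sketch added here; the test points are arbitrary and a tiny slack absorbs floating-point rounding):

```python
import math

def floor_scaled(x, a):
    # a * floor(x / a); math.floor handles negative arguments correctly
    return a * math.floor(x / a)

# |a * floor(x/a) - x| <= |a| for every x and every a != 0,
# so a * floor(x/a) -> x as a -> 0 by the squeeze lemma
checks = [
    abs(floor_scaled(x, a) - x) <= abs(a) + 1e-12
    for x in (2.7, -1.3, 0.0, 5.0)
    for a in (0.5, 0.1, 1e-3, -0.25, -1e-3)
]
```

Shrinking $a$ pins $a\lfloor x/a\rfloor$ ever closer to $x$, consistent with the limit being $x$.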
Inverse image of a Noetherian module under a module morphism
2,097,217
4
https://math.stackexchange.com/questions/2097217/inverse-image-of-a-noetherian-module-under-a-module-morphism
<p>During my exercise session of Commutative algebra we had the following question:</p> <p>Suppose $f: M \to N$ is a morphism of $R$-modules (where $R$ is a commutative ring with 1). Suppose $N$ is Noetherian, is $\ker f$ Noetherian?</p> <p>I know the answer is 'no', since if we consider the zeromap from any non-Noetherian $R$-module M to the zero module, then we have a counterexample. </p> <p>However, I wondered what we could say if $f$ is not the zero map and I have the following attempt: I have tried to prove that $f^{-1}(N)$ is Noetherian.</p> <p>Let $S$ be a submodule of $f^{-1}(N)$, then $f(S)$ is a submodule of $N$, hence it is finitely generated since $N$ is Noetherian. Let us denote $f(S) = \langle s_1, \ldots, s_n\rangle$. Since $s_1, \ldots, s_n$ are elements of the image of $f$, we can choose elements $m_1, \ldots m_n$ in $M$ such that $f(m_i) = s_i$. If $s \in S$, then $f(s) = r_1s_1 + \ldots r_ns_n$ and so we can write that $s = r_1m_1 + \ldots r_nm_n$. Therefore $S$ is a finitely generated submodule of $f^{-1}(N)$ and so $f^{-1}(N)$ is a Noetherian $R$-module. </p> <p>Since $\ker f$ is a submodule of $f^{-1}(N)$, we have that $\ker f$ is Noetherian.</p> <p>I guess the step where I write $s$ as linear combination of the $m_i$ is dodgy, so my question is: is the inverse image of a Noetherian module (under a non-zero map) Noetherian?</p>
[ { "answer_id": 2097225, "score": 4, "text": "<p>Take your favorite Noetherian module $N$ and consider a direct sum $M=N^{(X)}$ of copies of $N$, with $X$ an infinite set.</p>\n\n<p>Now prove that the map\n$$\n\\sigma\\colon N^{(X)}\\to N,\\qquad\n(n_x)_{x\\in X}\\mapsto \\sum_{x\\in X}n_x\n$$\nhas Noetherian kernel if and only if $N=0$.</p>\n\n<p>Another simpler example, where $M$ is even finitely generated. Take a non-Noetherian ring $R$, a maximal ideal $\\mathfrak{m}$ and set $M=R$, $N=R/\\mathfrak{m}$. Then $N$ is obviously Noetherian, but the kernel of the projection $R\\to R/\\mathfrak{m}$ is not Noetherian.</p>\n\n<p>Now, let's try being more general: prove the following.</p>\n\n<blockquote>\n <p>Suppose $f\\colon M\\to N$ is a morphism of $R$-modules. If the image $f(M)$ is Noetherian and $\\ker f$ is Noetherian, then also $M$ is Noetherian.</p>\n</blockquote>\n\n<hr>\n\n<p>Where does your attempt go wrong? Note that $f^{-1}(N)=M$, by definition. If $S$ is a submodule of $M$, then $f(S)$ is certainly finitely generated, but it's not true that, given that $f(m_1),\\dots,f(m_n)$ generate $f(S)$, also $m_1,\\dots,m_n$ generate $S$. What you can say is only that\n$$\nm_1R+\\dots+m_nR+(S\\cap\\ker f)=S\n$$\nbut then you're doomed, because you see you need $\\ker f$ to be Noetherian.</p>\n" }, { "answer_id": 2097227, "score": 3, "text": "<p>A very simple counter-example: take a non-finitely generated maximal ideal of a non-noetherian ring, viz. the ideal $(X_0,X_1,\\dots, X_n,\\dots)$ in the polynomial ring $k[X_0,X_1,\\dots, X_n,\\dots]$ over a field $k$.</p>\n\n<p>Then $k[X_0,X_1,\\dots, X_n,\\dots]/(X_0,X_1,\\dots, X_n,\\dots)\\simeq k$ is noetherian (and artinian) since it is a field.</p>\n\n<p>In the same spirit: Take a non-discrete height $1$ valuation domain $V$ and its residue field. The maximal ideal of $V$ is <em>not</em> finitely generated, since this would imply $V$ is a discrete valuation domain.</p>\n" } ]
Question: Inverse image of a Noetherian module under a module morphism Answer 1: <p>Take your favorite Noetherian module $N$ and consider a direct sum $M=N^{(X)}$ of copies of $N$, with $X$ an infinite set.</p> <p>Now prove that the map $$ \sigma\colon N^{(X)}\to N,\qquad (n_x)_{x\in X}\mapsto \sum_{x\in X}n_x $$ has Noetherian kernel if and only if $N=0$.</p> <p>Another simpler example, where $M$ is even finitely generated. Take a non-Noetherian ring $R$, a maximal ideal $\mathfrak{m}$ and set $M=R$, $N=R/\mathfrak{m}$. Then $N$ is obviously Noetherian, but the kernel of the projection $R\to R/\mathfrak{m}$ is not Noetherian.</p> <p>Now, let's try being more general: prove the following.</p> <blockquote> <p>Suppose $f\colon M\to N$ is a morphism of $R$-modules. If the image $f(M)$ is Noetherian and $\ker f$ is Noetherian, then also $M$ is Noetherian.</p> </blockquote> <hr> <p>Where does your attempt go wrong? Note that $f^{-1}(N)=M$, by definition. If $S$ is a submodule of $M$, then $f(S)$ is certainly finitely generated, but it's not true that, given that $f(m_1),\dots,f(m_n)$ generate $f(S)$, also $m_1,\dots,m_n$ generate $S$. What you can say is only that $$ m_1R+\dots+m_nR+(S\cap\ker f)=S $$ but then you're doomed, because you see you need $\ker f$ to be Noetherian.</p> Answer 2: <p>A very simple counter-example: take a non-finitely generated maximal ideal of a non-noetherian ring, viz. the ideal $(X_0,X_1,\dots, X_n,\dots)$ in the polynomial ring $k[X_0,X_1,\dots, X_n,\dots]$ over a field $k$.</p> <p>Then $k[X_0,X_1,\dots, X_n,\dots]/(X_0,X_1,\dots, X_n,\dots)\simeq k$ is noetherian (and artinian) since it is a field.</p> <p>In the same spirit: Take a non-discrete height $1$ valuation domain $V$ and its residue field. The maximal ideal of $V$ is <em>not</em> finitely generated, since this would imply $V$ is a discrete valuation domain.</p>
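The closing statement of the first answer (Noetherian kernel and image force $M$ Noetherian) completes the lifting argument attempted in the question; a sketch in the question's notation:

```latex
% M sits in a short exact sequence whose outer terms are Noetherian:
%   0 \to \ker f \to M \to f(M) \to 0.
% Given a submodule S \le M, lift generators f(m_1),\dots,f(m_n) of f(S);
% for s \in S with f(s) = \sum_i r_i f(m_i), the difference s - \sum_i r_i m_i
% lies in S \cap \ker f, hence
S = m_1 R + \dots + m_n R + (S \cap \ker f),
% and S \cap \ker f is finitely generated because \ker f is Noetherian.
% So every submodule S of M is finitely generated, i.e. M is Noetherian.
```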
Find $xyz$ given that $x + z + y = 5$, $x^2 + z^2 + y^2 = 21$, $x^3 + z^3 + y^3 = 80$
2,097,444
4
https://math.stackexchange.com/questions/2097444/find-xyz-given-that-x-z-y-5-x2-z2-y2-21-x3-z3-y3
<p>I was looking back in my junk, then I found this:</p> <blockquote> <p><span class="math-container">$$x + z + y = 5$$</span> <span class="math-container">$$x^2 + z^2 + y^2 = 21$$</span> <span class="math-container">$$x^3 + z^3 + y^3 = 80$$</span></p> <p>What is the value of <span class="math-container">$xyz$</span>?</p> <p><strong>A)</strong> <span class="math-container">$5$</span></p> <p><strong>B)</strong> <span class="math-container">$4$</span></p> <p><strong>C)</strong> <span class="math-container">$1$</span></p> <p><strong>D)</strong> <span class="math-container">$-4$</span></p> <p><strong>E)</strong> <span class="math-container">$-5$</span></p> </blockquote> <p>It's pretty easy, any chances of solving this question? I already have the answer for this, but I didn't fully understand.</p> <p>Thanks for the attention.</p>
[ { "answer_id": 2097458, "score": 23, "text": "<p>We have $$(x+y+z)^3=(x^3+y^3+z^3)+3x(y^2+z^2)+3y(x^2+z^2)+3z(x^2+y^2)+6xyz.$$ Hence</p>\n\n<p>$$125=80+3x(21-x^2)+3y(21-y^2)+3z(21-z^2)+6xyz.$$</p>\n\n<p>This leads to</p>\n\n<p>$$45=63(x+y+z)-3(x^3+y^3+z^3)+6xyz.$$</p>\n\n<p>This gives us $45=315-240+6xyz$, so $6xyz=-30$ and $xyz=-5$.</p>\n" }, { "answer_id": 2097463, "score": 8, "text": "<p>x + y + z = 5</p>\n\n<p>On squaring both sides,</p>\n\n<p>$(x + y + z)^2 = 25$</p>\n\n<p>$x^2 + y^2 + z^2 + 2xy + 2yz + 2zx = 25$</p>\n\n<p>$21 + 2xy + 2yz + 2zx = 25$</p>\n\n<p>$2xy + 2yz + 2zx = 25 - 21$</p>\n\n<p>$2xy + 2yz + 2zx = 4$</p>\n\n<p>$xy + yz + zx = 2$</p>\n\n<p>Also,</p>\n\n<p>$x^3 + y^3 + z^3 - 3xyz = (x + y + z)(x^2 + y^2 + z^2 - xy - yz - zx)$</p>\n\n<p>Putting values,</p>\n\n<p>$80 - 3xyz = (5)\\left[21 - (xy + yz + zx)\\right]$</p>\n\n<p>80 - 3xyz = (5)(21 - 2)</p>\n\n<p>80 - 3xyz = 95</p>\n\n<p>-3xyz = 15</p>\n\n<p>xyz = -5</p>\n" }, { "answer_id": 2100857, "score": 4, "text": "<p>Consider the polynomial</p>\n\n<p>$$p(t) = (1-x t)(1-y t)(1-z t)$$</p>\n\n<p>Let's consider the series expansion of $\\log\\left[p(t)\\right]$:</p>\n\n<p>$$\\log\\left[p(t)\\right] =-\\sum_{k=1}^{\\infty}\\frac{S_k}{k} t^k$$</p>\n\n<p>where</p>\n\n<p>$$S_k = x^k + y^k + z^k$$</p>\n\n<p>Since we're given the $S_k$ for $k$ up to $3$ we can write down the series expansion of $\\log\\left[p(t)\\right]$ up to third order in $t$, but that's sufficient to calculate $p(t)$, as it's a third degree polynomial. The coefficient of $t^3$ equals $-xyz$, so we only need to focus on that term. 
We have:</p>\n\n<p>$$\\log\\left[p(t)\\right] = -\\left(5 t +\\frac{21}{2} t^2 + \\frac{80}{3} t^3+\\cdots\\right)$$</p>\n\n<p>Exponentiating yields:</p>\n\n<p>$$p(t) = \\exp(-5t)\\exp\\left(-\\frac{21}{2}t^2\\right)\\exp\\left(-\\frac{80}{3}t^3\\right)\\times\\exp\\left[\\mathcal{O}(t^4)\\right]$$</p>\n\n<p>The $t^3$ term can come in its entirety from the first factor, or we can pick the linear term in $t$ from there and then multiply that by the $t^2$ term from the second factor or we can take the $t^3$ term from the last factor. Adding up the 3 possibilities yields:</p>\n\n<p>$$xyz = \\frac{5^3}{3!} -5\\times\\frac{21}{2} + \\frac{80}{3} = -5$$</p>\n\n<p>It is also easy to show that $x^4 + y^4 + z^4 = 333$ by using that the coefficient of the $t^4$ term is zero.</p>\n" }, { "answer_id": 3896679, "score": 1, "text": "<blockquote>\n<p><span class=\"math-container\">\\begin{align} x + z + y &amp;= u=5 \\tag{1}\\label{1} ,\\\\ x^2 + z^2 + y^2 &amp;= v=21 \\tag{2}\\label{2},\\\\ x^3 +z^3 + y^3 &amp;= w=80 \\tag{3}\\label{3}. 
\\end{align}</span></p>\n<p>What is the value of <span class=\"math-container\">$xyz$</span>?</p>\n</blockquote>\n<p>Surprisingly, the <a href=\"https://brilliant.org/wiki/ravi-substitution\" rel=\"nofollow noreferrer\">Ravi substitution</a>\nworks in this case, despite that\nnot all the numbers <span class=\"math-container\">$x,y,z$</span> are positive,\nand hence, the corresponding triangle is &quot;unreal&quot;.</p>\n<p>So, let\n<span class=\"math-container\">\\begin{align}\t\na &amp;= y + z\n,\\quad\nb = z + x\n,\\quad\nc = x + y\n\\tag{4}\\label{4}\n,\\\\\nx&amp;=\\rho-a\n,\\quad\ny=\\rho-b\n,\\quad\nz=\\rho-c\n\\tag{5}\\label{5}\n,\n\\end{align}</span></p>\n<p>where the triplet <span class=\"math-container\">$a, b, c$</span> represents the sides of a triangle\nwith semiperimeter <span class=\"math-container\">$\\rho$</span>, inradius <span class=\"math-container\">$r$</span> and circumradius <span class=\"math-container\">$R$</span>.</p>\n<p>Then</p>\n<p><span class=\"math-container\">\\begin{align}\t\nx + z + y &amp;= \\rho\n\\tag{6}\\label{6}\n,\\\\\nx^2 + z^2 + y^2 &amp;= \\rho^2-2(r^2+4rR)\n\\tag{7}\\label{7}\n,\\\\\nx^3 + z^3 + y^3 &amp;= \\rho(\\rho^2-12rR)\n\\tag{8}\\label{8}\n,\\\\\nxyz&amp;=\\rho\\,r^2\n\\tag{9}\\label{9}\n.\n\\end{align}</span></p>\n<p>Excluding <span class=\"math-container\">$rR$</span> from \\eqref{7}-\\eqref{8},\nwe get</p>\n<p><span class=\"math-container\">\\begin{align}\t\nr^2&amp;=\n\\tfrac16\\,\\rho^2-\\tfrac12\\,v+\\tfrac13\\,\\frac w{\\rho}\n\\tag{10}\\label{10}\n,\n\\end{align}</span></p>\n<p>and from \\eqref{9} we have the answer</p>\n<p><span class=\"math-container\">\\begin{align}\t\nxyz&amp;=\n\\tfrac16\\,\\rho^3-\\tfrac12\\,v\\rho+\\tfrac13\\,w\n=\n\\tfrac16\\,5^3-\\tfrac12\\,21\\cdot5+\\tfrac13\\,80\n=-5\n\\tag{11}\\label{11}\n.\n\\end{align}</span></p>\n<p>As a bonus, we can find that</p>\n<p><span class=\"math-container\">\\begin{align}\t\nrR &amp;= 
\\tfrac1{12}\\,\\frac{\\rho^3-w}{\\rho}\n\\tag{12}\\label{12}\n\\end{align}</span></p>\n<p>and <span class=\"math-container\">$x=\\rho-a,\\ y=\\rho-b,\\ z=\\rho-c$</span> are the roots of cubic equation</p>\n<p><span class=\"math-container\">\\begin{align}\nx^3-\\rho\\,x^2+(r^2+4rR)\\,x-\\rho r^2&amp;=0\n\\tag{13}\\label{13}\n,\\\\\n\\text{or }\\quad\nx^3-\\rho\\,x^2+\\tfrac12\\,(\\rho^2-v)\\,x-\\tfrac13\\,w-\\tfrac16\\,\\rho\\,(\\rho^2-3v)&amp;=0\n\\tag{14}\\label{14}\n,\\\\\nx^3-5\\,x^2+2\\,x+5&amp;=0\n\\tag{15}\\label{15}\n.\n\\end{align}</span></p>\n<p>One of the roots of \\eqref{15} is</p>\n<p><span class=\"math-container\">\\begin{align}\nx &amp;= \\tfrac53+\n\\tfrac23\\,\\sqrt{19}\\,\\cos\\Big(\\tfrac13\\,\\arctan(\\tfrac9{25}\\,\\sqrt{331})\\Big)\n\\approx 4.253418\n\\tag{16}\\label{16}\n,\\\\\n\\text{the other two are }\\quad\ny&amp;\\approx -0.773387\n,\\quad\nz\\approx 1.519969\n\\tag{17}\\label{17}\n.\n\\end{align}</span></p>\n" } ]
Question: Find $xyz$ given that $x + z + y = 5$, $x^2 + z^2 + y^2 = 21$, $x^3 + z^3 + y^3 = 80$ Answer 1: <p>We have $$(x+y+z)^3=(x^3+y^3+z^3)+3x(y^2+z^2)+3y(x^2+z^2)+3z(x^2+y^2)+6xyz.$$ Hence</p> <p>$$125=80+3x(21-x^2)+3y(21-y^2)+3z(21-z^2)+6xyz.$$</p> <p>This leads to</p> <p>$$45=63(x+y+z)-3(x^3+y^3+z^3)+6xyz.$$</p> <p>This gives us $45=315-240+6xyz$, so $6xyz=-30$ and $xyz=-5$.</p> Answer 2: <p>x + y + z = 5</p> <p>On squaring both sides,</p> <p>$(x + y + z)^2 = 25$</p> <p>$x^2 + y^2 + z^2 + 2xy + 2yz + 2zx = 25$</p> <p>$21 + 2xy + 2yz + 2zx = 25$</p> <p>$2xy + 2yz + 2zx = 25 - 21$</p> <p>$2xy + 2yz + 2zx = 4$</p> <p>$xy + yz + zx = 2$</p> <p>Also,</p> <p>$x^3 + y^3 + z^3 - 3xyz = (x + y + z)(x^2 + y^2 + z^2 - xy - yz - zx)$</p> <p>Putting values,</p> <p>$80 - 3xyz = (5)\left[21 - (xy + yz + zx)\right]$</p> <p>80 - 3xyz = (5)(21 - 2)</p> <p>80 - 3xyz = 95</p> <p>-3xyz = 15</p> <p>xyz = -5</p> Answer 3: <p>Consider the polynomial</p> <p>$$p(t) = (1-x t)(1-y t)(1-z t)$$</p> <p>Let's consider the series expansion of $\log\left[p(t)\right]$:</p> <p>$$\log\left[p(t)\right] =-\sum_{k=1}^{\infty}\frac{S_k}{k} t^k$$</p> <p>where</p> <p>$$S_k = x^k + y^k + z^k$$</p> <p>Since we're given the $S_k$ for $k$ up to $3$ we can write down the series expansion of $\log\left[p(t)\right]$ up to third order in $t$, but that's sufficient to calculate $p(t)$, as it's a third degree polynomial. The coefficient of $t^3$ equals $-xyz$, so we only need to focus on that term. We have:</p> <p>$$\log\left[p(t)\right] = -\left(5 t +\frac{21}{2} t^2 + \frac{80}{3} t^3+\cdots\right)$$</p> <p>Exponentiating yields:</p> <p>$$p(t) = \exp(-5t)\exp\left(-\frac{21}{2}t^2\right)\exp\left(-\frac{80}{3}t^3\right)\times\exp\left[\mathcal{O}(t^4)\right]$$</p> <p>The $t^3$ term can come in its entirety from the first factor, or we can pick the linear term in $t$ from there and then multiply that by the $t^2$ term from the second factor or we can take the $t^3$ term from the last factor. 
Adding up the 3 possibilities yields:</p> <p>$$xyz = \frac{5^3}{3!} -5\times\frac{21}{2} + \frac{80}{3} = -5$$</p> <p>It is also easy to show that $x^4 + y^4 + z^4 = 333$ by using that the coefficient of the $t^4$ term is zero.</p> Answer 4: <blockquote> <p><span class="math-container">\begin{align} x + z + y &amp;= u=5 \tag{1}\label{1} ,\\ x^2 + z^2 + y^2 &amp;= v=21 \tag{2}\label{2},\\ x^3 +z^3 + y^3 &amp;= w=80 \tag{3}\label{3}. \end{align}</span></p> <p>What is the value of <span class="math-container">$xyz$</span>?</p> </blockquote> <p>Surprisingly, the <a href="https://brilliant.org/wiki/ravi-substitution" rel="nofollow noreferrer">Ravi substitution</a> works in this case, despite that not all the numbers <span class="math-container">$x,y,z$</span> are positive, and hence, the corresponding triangle is &quot;unreal&quot;.</p> <p>So, let <span class="math-container">\begin{align} a &amp;= y + z ,\quad b = z + x ,\quad c = x + y \tag{4}\label{4} ,\\ x&amp;=\rho-a ,\quad y=\rho-b ,\quad z=\rho-c \tag{5}\label{5} , \end{align}</span></p> <p>where the triplet <span class="math-container">$a, b, c$</span> represents the sides of a triangle with semiperimeter <span class="math-container">$\rho$</span>, inradius <span class="math-container">$r$</span> and circumradius <span class="math-container">$R$</span>.</p> <p>Then</p> <p><span class="math-container">\begin{align} x + z + y &amp;= \rho \tag{6}\label{6} ,\\ x^2 + z^2 + y^2 &amp;= \rho^2-2(r^2+4rR) \tag{7}\label{7} ,\\ x^3 + z^3 + y^3 &amp;= \rho(\rho^2-12rR) \tag{8}\label{8} ,\\ xyz&amp;=\rho\,r^2 \tag{9}\label{9} . 
\end{align}</span></p> <p>Excluding <span class="math-container">$rR$</span> from \eqref{7}-\eqref{8}, we get</p> <p><span class="math-container">\begin{align} r^2&amp;= \tfrac16\,\rho^2-\tfrac12\,v+\tfrac13\,\frac w{\rho} \tag{10}\label{10} , \end{align}</span></p> <p>and from \eqref{9} we have the answer</p> <p><span class="math-container">\begin{align} xyz&amp;= \tfrac16\,\rho^3-\tfrac12\,v\rho+\tfrac13\,w = \tfrac16\,5^3-\tfrac12\,21\cdot5+\tfrac13\,80 =-5 \tag{11}\label{11} . \end{align}</span></p> <p>As a bonus, we can find that</p> <p><span class="math-container">\begin{align} rR &amp;= \tfrac1{12}\,\frac{\rho^3-w}{\rho} \tag{12}\label{12} \end{align}</span></p> <p>and <span class="math-container">$x=\rho-a,\ y=\rho-b,\ z=\rho-c$</span> are the roots of cubic equation</p> <p><span class="math-container">\begin{align} x^3-\rho\,x^2+(r^2+4rR)\,x-\rho r^2&amp;=0 \tag{13}\label{13} ,\\ \text{or }\quad x^3-\rho\,x^2+\tfrac12\,(\rho^2-v)\,x-\tfrac13\,w-\tfrac16\,\rho\,(\rho^2-3v)&amp;=0 \tag{14}\label{14} ,\\ x^3-5\,x^2+2\,x+5&amp;=0 \tag{15}\label{15} . \end{align}</span></p> <p>One of the roots of \eqref{15} is</p> <p><span class="math-container">\begin{align} x &amp;= \tfrac53+ \tfrac23\,\sqrt{19}\,\cos\Big(\tfrac13\,\arctan(\tfrac9{25}\,\sqrt{331})\Big) \approx 4.253418 \tag{16}\label{16} ,\\ \text{the other two are }\quad y&amp;\approx -0.773387 ,\quad z\approx 1.519969 \tag{17}\label{17} . \end{align}</span></p>
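All four answers implicitly compute the elementary symmetric functions $e_1=5$, $e_2=2$, $e_3=xyz$; they can be recovered mechanically from Newton's identities. A short exact-arithmetic check (Python, standard library only):

```python
from fractions import Fraction as F

p1, p2, p3 = F(5), F(21), F(80)  # the given power sums

# Newton's identities for three variables:
#   p1 = e1,   p2 = e1*p1 - 2*e2,   p3 = e1*p2 - e2*p1 + 3*e3
e1 = p1
e2 = (e1 * p1 - p2) / 2
e3 = (p3 - e1 * p2 + e2 * p1) / 3

print(e1, e2, e3)  # 5 2 -5, so xyz = e3 = -5

# Bonus claim from the generating-function answer: x^4 + y^4 + z^4
p4 = e1 * p3 - e2 * p2 + e3 * p1
print(p4)  # 333
```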
Number of unique hex tiles with two paths on each edge?
2,097,450
4
https://math.stackexchange.com/questions/2097450/number-of-unique-hex-tiles-with-two-paths-on-each-edge
<p>I want to know:</p> <p>How many "unique" hexagonal tiles can be created with two paths exiting on each edge?</p> <p>There will always be only 6 paths (connecting two different exits) per tile. </p> <p>Note: No duplicate tiles. Meaning, if you were to rotate the tile in any orientation it would not be the same as another. </p>
[ { "answer_id": 2098318, "score": 2, "text": "<p>This problem is a case of <strong>Power Group Enumeration</strong> in the sense of\nthe paper by Fripertinger which was discussed quite extensively at the\nfollowing <a href=\"https://math.stackexchange.com/questions/864116/\">MSE link</a>.\nThe set of values consists of the possible edges and is being permuted\nby the rotations of the hexagon, call this $G$. Edges are represented\nby products $A_p A_q$ where $p$ and $q$ range from zero to eleven and\ncorrespond to a clockwise enumeration of the exits and there are no\nedges between adjacent exits (same edge). This is a feature of the\nproblem which is open to debate and which I hope I have interpreted\ncorrectly. The slots are simply six slots being permuted by the\nsymmetric group $S_6.$ The technique is exactly the same as at the\nquoted link -- note that we seek sets of edges and not multisets which\nsimplifies things considerably. In a nutshell we consider pairs\n$(\\beta,\\alpha)$ of permutations from the edge permutations $G$ and\nthe symmetric group $S_6$ and compute the generating function\ncorresponding to all possible ways of covering $\\alpha$ with cycles\nfrom $\\beta.$ At the end we extract those terms that contain every\nexit exactly once. A remarkable feature of this problem is that we are\nworking with generating functions here but the generating function\nthat appears when we try to cover $a_1^6$ with cycles (fixed points)\nfrom $a_1^{60}$ produces a completely unmanageable number of terms\nwhich is impossible to process. Fortunately the contribution of\nthis term can be computed in closed form. It has no symmetries and\nsimply counts all tiles before symmetry.
As we must exclude edges\nbetween adjacent exits we get by inclusion-exclusion</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{k=0}^6 {6\\choose k} (-1)^k \\frac{(12-2k)!}{2^{6-k} (6-k)!}\n= 6040.}$$</p>\n\n<p>We simply wire this value into the program and detect $a_1^{60}$\nduring the <strong>Power Group Enumeration</strong> computation. This is all we\nneed to be aware of at this time and the answer is</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{1060.}$$</p>\n\n<p>The reader is invited to verify this result by classifying tiles\naccording to their symmetries and counting the number of tiles in\nevery class.</p>\n\n<p>I also verfied this by a total enumeration routine which in fact does\nnot require all that much time / space and confirms the above\nfindings. PGE obviously generalizes while simple enumeration does not.</p>\n\n<p><P>This was the Maple code.</p>\n\n<pre>\nwith(combinat);\n\npet_autom2cyclesA :=\nproc(src, aut)\nlocal numa, numsubs, marks, pos, cycs,\n data, item, cpos, clen;\n\n numsubs := [seq(src[k]=k, k=1..nops(src))];\n numa := subs(numsubs, aut);\n\n marks := Array([seq(true, pos=1..nops(aut))]);\n\n cycs := []; pos := 1; data := [];\n\n while pos &lt;= nops(aut) do\n if marks[pos] then\n clen := 0; item := []; cpos := pos;\n\n while marks[cpos] do\n marks[cpos] := false;\n item := [op(item), aut[cpos]];\n\n cpos := numa[cpos];\n clen := clen+1;\n od;\n\n cycs := [op(cycs), clen];\n data := [op(data), item];\n fi;\n\n pos := pos+1;\n od;\n\n return [data, mul(a[cycs[k]], k=1..nops(cycs))];\nend;\n\n\npet_cycleind_symm :=\nproc(n)\nlocal p, s;\noption remember;\n\n if n=0 then return 1; fi;\n\n expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));\nend;\n\n\nedges_all_src :=\nproc()\noption remember;\nlocal edges;\n\n edges :=\n add(add(A[p]*A[q], q=p+1..11), p=0..11)\n - add(A[2*p]*A[2*p+1], p=0..5);\n\n [seq(p, p in edges)];\nend;\n\npet_perms_edge_cind :=\nproc()\noption remember;\nlocal cind, p, q, perm, sl,\n edges, edgeperm, 
rotind;\n\n edges := edges_all_src();\n\n cind := [];\n\n for rotind from 0 to 5 do\n sl :=\n [seq(A[p]=A[(p+2*rotind) mod 12], p=0..11)];\n edgeperm := subs(sl, edges);\n\n cind :=\n [op(cind), pet_autom2cyclesA(edges, edgeperm)];\n od;\n\n cind;\nend;\n\ntiles :=\nproc()\noption remember;\nlocal slot_idx, edge_idx, gf, var, flat, cycs,\n edge_term, slot_term, contrib, cover, deg, onesize;\n\n slot_idx := pet_cycleind_symm(6);\n edge_idx := pet_perms_edge_cind();\n\n gf := 0;\n\n for slot_term in slot_idx do\n for edge_term in edge_idx do\n if edge_term[2] = a[1]^60 then\n next;\n fi;\n\n if type(edge_term[2]/slot_term,\n &#96;monomial&#96;) then\n contrib := 1;\n\n for var in indets(slot_term) do\n cycs :=\n select(c-&gt;nops(c)=op(1,var),\n edge_term[1]);\n\n onesize := 0;\n deg := degree(slot_term, var);\n\n for cover in choose(cycs, deg) do\n onesize := onesize\n + deg!*op(1,var)^deg*\n mul(mul(el, el in cyc),\n cyc in cover);\n od;\n\n contrib := contrib*onesize;\n od;\n\n gf := gf + lcoeff(slot_term)*contrib;\n fi;\n od;\n od;\n\n gf := expand(gf);\n\n for var from 0 to 11 do\n gf := coeff(gf, A[var], 1);\n od;\n\n (gf + 6040)/6;\nend;\n\nenum_all_symm :=\nproc()\noption remember;\nlocal recurse, count, edges, orbits;\n\n edges := edges_all_src();\n orbits := table();\n\n recurse :=\n proc(sofar, sel, n, pos)\n local orbit, sl, rotind;\n\n if n = 6 then\n count := count + 1;\n\n orbit := {};\n\n for rotind from 0 to 5 do\n sl :=\n [seq(A[p]=A[p+2*rotind mod 12],\n p=0..11)];\n\n orbit :=\n {op(orbit), subs(sl, sel)};\n od;\n\n orbits[orbit] := 1;\n\n return;\n fi;\n\n if pos &gt; nops(edges) then\n return;\n fi;\n\n recurse(sofar, sel, n, pos+1);\n\n if nops(indets(sofar*edges[pos])) =\n 2*(n+1) then\n recurse(sofar*edges[pos],\n {op(sel), edges[pos]},\n n+1, pos+1);\n fi;\n end;\n\n count := 0;\n recurse(1, {}, 0, 1);\n\n [count, nops([indices(orbits)])];\nend;\n</pre>\n\n<p><strong>Addendum Mon Jun 26 2017.</strong> Presenting a completely 
reworked\nalgorithm and implementation, which gives an instant answer. The\nprevious version computed the entire generating function in the\nvariables representing the exits, expanding intermediate terms even\nincluding those that did not contribute. As indicated this algorithm\nbreaks down when placing cycles corresponding to $a_1^{60}$ on\n$a_1^6.$ The new version uses</p>\n\n<p>$$[z^k] \\prod_{q\\in C} (1+zq)$$</p>\n\n<p>to represent a choice of $k$ cycles from a set of cycles $C$ and does\nnot expand this term, which means that the contribution from\n$a_1^{60}$ only consists of $60$ terms as opposed to ${60\\choose 6},$\nwhich is not feasible. We thus get a product of these (times a leading\ncoefficient) when there exists a covering of the slot permutation\n$\\alpha$ by cycles from the edge permutation $\\beta,$ where $k$ is the\nnumber of cycles of a given size in $\\alpha$ and $C$ are the cycles of\nthat size from $\\beta,$ which is not a conjugacy class in this problem\nbut an actual permutation of the edges. Having constructed this\nproduct we differentiate with respect to all variables in sequence and\nset them to zero thereafter, which isolates the contribution from\nthose terms where each exit occurs just once. We extract the\ncoefficients that correspond to the desired power of $[z^k]$ at the\nvery end, producing a number which gives the possible coverings of\n$\\alpha$ by cycles from $\\beta$ that contribute to the count of\nadmissible tiles. The Maple code was as follows (the enumeration\nroutine has also been improved.) 
We omit the prefix that is shared\nwith the first version.</p>\n\n<pre>\ntiles :=\nproc()\noption remember;\nlocal slot_idx, edge_idx, slot_term, edge_term,\n cycs, deg, gf, var, res;\n\n slot_idx := pet_cycleind_symm(6);\n edge_idx := pet_perms_edge_cind();\n\n res := 0;\n\n for slot_term in slot_idx do\n for edge_term in edge_idx do\n if type(edge_term[2]/slot_term,\n &#96;monomial&#96;) then\n gf := 1;\n\n for var in indets(slot_term) do\n cycs :=\n select(c-&gt;nops(c)=op(1,var),\n edge_term[1]);\n\n deg := degree(slot_term, var);\n\n gf := gf*deg!*op(1, var)^deg\n *mul(1+z[op(1, var)]*mul(el, el in cyc),\n cyc in cycs);\n od;\n\n for var from 0 to 11 do\n gf :=\n subs(A[var] = 0, diff(gf, A[var]));\n od;\n\n for var in indets(slot_term) do\n gf :=\n coeftayl(gf, z[op(1, var)]=0,\n degree(slot_term, var));\n od;\n\n res := res + lcoeff(slot_term)*gf;\n fi;\n od;\n od;\n\n res/nops(edge_idx);\nend;\n\nenum_all_tiles :=\nproc()\noption remember;\nlocal recurse, count, edges, orbits, sl;\n\n edges := edges_all_src();\n orbits := table();\n\n sl :=\n [seq([seq(A[p]=A[p+2*r mod 12], p=0..11)],\n r=1..5)];\n\n recurse :=\n proc(sofar, sel, n, pos)\n local orbit, rotind;\n\n if n = 6 then\n count := count + 1;\n\n orbit := [sel];\n\n for rotind to 5 do\n orbit :=\n [op(orbit),\n subs(sl[rotind], sel)];\n od;\n\n orbits[sort(orbit)[1]] := 1;\n\n return;\n fi;\n\n if pos &gt; nops(edges) then\n return;\n fi;\n\n recurse(sofar, sel, n, pos+1);\n\n if nops(indets(sofar*edges[pos])) =\n 2*(n+1) then\n recurse(sofar*edges[pos],\n {op(sel), edges[pos]},\n n+1, pos+1);\n fi;\n end;\n\n count := 0;\n recurse(1, {}, 0, 1);\n\n [count, numelems(orbits)];\nend;\n</pre>\n\n<p><strong>Addendum Wed Jun 28 2017.</strong> We present the case of arbitrary\npolygonal tiles having $q$ sides rather than hexagonal ones to\ncomplete this discussion and end on a challenge to the reader. 
We get\nby inclusion-exclusion for the count with no symmetries the formula</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\sum_{k=0}^q {q\\choose k} (-1)^k \\frac{(2q-2k)!}{2^{q-k} (q-k)!}.}$$</p>\n\n<p>Here we choose the $k$ spots where there is an edge between adjacent\nexits (which we are wanting to avoid), for a factor of ${q\\choose k}.$\nThis leaves $2q-2k$ exits. We choose two of these to connect by an\nedge, then another two from the remaining exits, and another two, and\nso on, for a multinomial coefficient ${2q-2k\\choose 2,2,\\ldots,2}.$\nHere we may choose one set of edges in $(q-k)!$ ways, hence the\nformula. We obtain the sequence starting at $q=1$:</p>\n\n<p>$$0, 2, 8, 60, 544, 6040, 79008, 1190672, 20314880, 387099936,\\ldots$$</p>\n\n<p>Taking symmetries into account yields the following sequence:</p>\n\n<p>$$0, 2, 4, 22, 112, 1060, 11292, 149448, 2257288, 38720728\\ldots$$</p>\n\n<p>and the challenge to the reader is of course to compute more\nterms for the latter. 
This is the Maple code:</p>\n\n<pre>\nedges_all_src :=\nproc(q)\noption remember;\nlocal edges;\n\n edges :=\n add(add(A[p]*A[r], r=p+1..2*q-1), p=0..2*q-1)\n - add(A[2*p]*A[2*p+1], p=0..q-1);\n\n [seq(p, p in edges)];\nend;\n\npet_perms_edge_cind :=\nproc(q)\noption remember;\nlocal cind, perm, sl,\n edges, edgeperm, rotind;\n\n edges := edges_all_src(q);\n\n cind := [];\n\n for rotind from 0 to q-1 do\n sl :=\n [seq(A[p]=A[(p+2*rotind) mod 2*q], p=0..2*q-1)];\n edgeperm := subs(sl, edges);\n\n cind :=\n [op(cind), pet_autom2cyclesA(edges, edgeperm)];\n od;\n\n cind;\nend;\n\ntiles :=\nproc(q)\noption remember;\nlocal slot_idx, edge_idx, slot_term, edge_term,\n cycs, deg, gf, var, res;\n\n slot_idx := &#96;if&#96;(q=1, [a[1]], pet_cycleind_symm(q));\n edge_idx := pet_perms_edge_cind(q);\n\n res := 0;\n\n for slot_term in slot_idx do\n for edge_term in edge_idx do\n if type(edge_term[2]/slot_term,\n &#96;monomial&#96;) then\n gf := 1;\n\n for var in indets(slot_term) do\n cycs :=\n select(c-&gt;nops(c)=op(1,var),\n edge_term[1]);\n\n deg := degree(slot_term, var);\n\n gf := gf*deg!*op(1, var)^deg\n *mul(1+z[op(1, var)]*mul(el, el in cyc),\n cyc in cycs);\n od;\n\n for var from 0 to 2*q-1 do\n gf :=\n subs(A[var] = 0, diff(gf, A[var]));\n od;\n\n for var in indets(slot_term) do\n gf :=\n coeftayl(gf, z[op(1, var)]=0,\n degree(slot_term, var));\n od;\n\n res := res + lcoeff(slot_term)*gf;\n fi;\n od;\n od;\n\n res/nops(edge_idx);\nend;\n\ntiles_all :=\nq -&gt; add(binomial(q,k)*(-1)^k*(2*q-2*k)!/2^(q-k)/(q-k)!,\n k = 0..q);\n\nenum_tiles_all :=\nproc(q)\noption remember;\nlocal recurse, count, edges, orbits, sl;\n\n edges := edges_all_src(q);\n orbits := table();\n\n sl :=\n [seq([seq(A[p]=A[p+2*r mod 2*q], p=0..2*q-1)],\n r=1..q-1)];\n\n recurse :=\n proc(sofar, sel, n, pos)\n local orbit, rotind;\n\n if n = q then\n count := count + 1;\n\n orbit := [sel];\n\n for rotind to q-1 do\n orbit :=\n [op(orbit),\n subs(sl[rotind], sel)];\n od;\n\n orbits[sort(orbit)[1]] := 
1;\n\n return;\n fi;\n\n if pos &gt; nops(edges) then\n return;\n fi;\n\n recurse(sofar, sel, n, pos+1);\n\n if nops(indets(sofar*edges[pos])) =\n 2*(n+1) then\n recurse(sofar*edges[pos],\n {op(sel), edges[pos]},\n n+1, pos+1);\n fi;\n end;\n\n count := 0;\n recurse(1, {}, 0, 1);\n\n [count, numelems(orbits)];\nend;\n</pre>\n\n<p><strong>Addendum Fri Jun 30 2017.</strong> The data for dihedral symmetries being\ntaken into account (rotational and reflectional) are as follows:</p>\n\n<p>$$0, 2, 4, 19, 80, 638, 6054, 76692, 1137284, 19405244,\\ldots $$</p>\n\n<p><strong>Addendum Mon Jul 3 2017.</strong> It appears that plain Burnside is still\nthe best approach. This requires that we compute the number of tiles\nfixed by each of the $q$ rotations including the identity, which fixes\nall tiles. These are counted by the inclusion-exclusion formula given\nabove. For the remaining rotations we have that they must be constant\non the cycles (obtained by factorizing the corresponding permutation\nof edges). This means that to be fixed by a given rotation a cycle of\nedges is either switched on or off (all edges simultaneously). We use\na backtracking search to determine which cycles may be switched on at\nthe same time without creating conflict (more than one edge incident\non the same exit.) This lets us calculate three more values for\nrotational symmetries and we obtain</p>\n\n<p>$$0, 2, 4, 22, 112, 1060, 11292, 149448, 2257288, 38720728, \n\\\\ 740754220, 15648468804, 361711410384, \\ldots$$</p>\n\n<p>We also find that we have effectively solved the problem for tiles\nwhose number of sides is a prime. 
We find (more of these can be\ncomputed)</p>\n\n<p>$$2, 4, 112, 11292, 740754220, 361711410384, 222595582448849152,\n\\\\ 258327454310582805036, 661821993709898403923269564, \n\\\\ 10266982973657640119698928948136690256,\\ldots$$</p>\n\n<p>The Maple code for this goes as follows.</p>\n\n<pre>\nwith(combinat);\n\npet_autom2cyclesA :=\nproc(src, aut)\nlocal numa, numsubs, marks, pos, cycs,\n data, item, cpos, clen;\n\n numsubs := [seq(src[k]=k, k=1..nops(src))];\n numa := subs(numsubs, aut);\n\n marks := Array([seq(true, pos=1..nops(aut))]);\n\n cycs := []; pos := 1; data := [];\n\n while pos &lt;= nops(aut) do\n if marks[pos] then\n clen := 0; item := []; cpos := pos;\n\n while marks[cpos] do\n marks[cpos] := false;\n item := [op(item), aut[cpos]];\n\n cpos := numa[cpos];\n clen := clen+1;\n od;\n\n cycs := [op(cycs), clen];\n data := [op(data), item];\n fi;\n\n pos := pos+1;\n od;\n\n return [data, mul(a[cycs[k]], k=1..nops(cycs))];\nend;\n\n\nedges_all_src :=\nproc(q)\noption remember;\nlocal edges;\n\n edges :=\n add(add(A[p]*A[r], r=p+1..2*q-1), p=0..2*q-1)\n - add(A[2*p]*A[2*p+1], p=0..q-1);\n\n [seq(p, p in edges)];\nend;\n\ntiles_no_symm :=\nq -&gt; add(binomial(q,k)*(-1)^k\n *(2*q-2*k)!/2^(q-k)/(q-k)!, k=0..q);\n\nbs_tiles :=\nproc(q)\noption remember;\nlocal res, sl, recurse, edgecycs, cind, vsets,\n edges, edgeperm, rotind, seen, factored;\n\n edges := edges_all_src(q);\n\n res := tiles_no_symm(q);\n\n recurse :=\n proc(cycs, sofar, pos)\n local data, nxt;\n\n if nops(sofar) = 2*q then\n return 1\n fi;\n\n if pos &gt; nops(cycs) then\n return 0\n fi;\n\n data := recurse(cycs, sofar, pos+1);\n\n if nops(cycs[pos] intersect sofar) = 0 then\n data := data +\n recurse(cycs, cycs[pos] union sofar, pos+1);\n fi;\n\n return data;\n end;\n\n cind := [];\n\n seen := table(); \n for rotind to q-1 do\n sl :=\n [seq(A[p]=A[(p+2*rotind) mod 2*q], p=0..2*q-1)];\n\n edgeperm := subs(sl, edges);\n\n cind :=\n [op(cind),\n pet_autom2cyclesA(edges, edgeperm)];\n od;\n\n for 
factored in cind do\n if not(type(seen[factored[2]], &#96;integer&#96;)) then\n vsets :=\n map(indets,\n select(cf -&gt; { seq(degree(cf, v), v in indets(cf)) }\n = { 1 },\n map(cyc -&gt; mul(el, el in cyc), factored[1])));\n\n seen[factored[2]] := recurse(vsets, {}, 1);\n fi;\n\n res := res + seen[factored[2]];\n od;\n\n res/q;\nend;\n</pre>\n\n<p>With Burnside we also get three more terms for dihedral symmetry,\nwhich yields</p>\n\n<p>$$ 0, 2, 4, 19, 80, 638, 6054, 76692, 1137284, 19405244, \n\\\\ 370597430, 7825459362, 180862277352,\\ldots$$</p>\n\n<p>The reason why the above code for cyclic symmetry is structured the\nway it is becomes apparent when we study the code for dihedral\nsymmetry, which is why I decided to present it here, omitting\nduplicate code.</p>\n\n<pre>\nedges_all_src :=\nproc(q)\noption remember;\nlocal edges;\n\n edges :=\n add(add(C[p]*C[r], r=p+1..2*q-1), p=0..2*q-1)\n - add(C[2*p]*C[2*p+1], p=0..q-1);\n\n subs([seq(C[2*p]=A[p], p=0..q-1),\n seq(C[2*p+1]=B[p], p=0..q-1)],\n [seq(p, p in edges)]);\nend;\n\ntiles_no_symm :=\nq -&gt; add(binomial(q,k)*(-1)^k\n *(2*q-2*k)!/2^(q-k)/(q-k)!, k=0..q);\n\nbs_tiles_dh :=\nproc(q)\nlocal res, sl, recurse, cind, vsets,\n edges, edgeperm, rotind, seen, factored;\n\n edges := edges_all_src(q);\n\n res := tiles_no_symm(q);\n\n recurse :=\n proc(cycs, sofar, pos)\n local data, nxt;\n\n if nops(sofar) = 2*q then\n return 1\n fi;\n\n if pos &gt; nops(cycs) then\n return 0\n fi;\n\n data := recurse(cycs, sofar, pos+1);\n\n if nops(cycs[pos] intersect sofar) = 0 then\n data := data +\n recurse(cycs, cycs[pos] union sofar, pos+1);\n fi;\n\n return data;\n end;\n\n cind := [];\n\n seen := table();\n for rotind from 0 to q-1 do\n if rotind &gt;= 1 then\n sl :=\n [seq(A[p]=A[(p+rotind) mod q], p=0..q-1),\n seq(B[p]=B[(p+rotind) mod q], p=0..q-1)];\n\n edgeperm := subs(sl, edges);\n\n cind :=\n [op(cind),\n pet_autom2cyclesA(edges, edgeperm)];\n fi;\n\n sl :=\n [seq(A[p]=B[q-1-((p+rotind) mod q)], p=0..q-1),\n 
seq(B[p]=A[q-1-((p+rotind) mod q)], p=0..q-1)];\n\n edgeperm := subs(sl, edges);\n\n cind :=\n [op(cind),\n pet_autom2cyclesA(edges, edgeperm)];\n od;\n\n for factored in cind do\n if not(type(seen[factored[2]], &#96;integer&#96;)) then\n vsets :=\n map(indets,\n select(cf -&gt; { seq(degree(cf, v), v in indets(cf)) }\n = { 1 },\n map(cyc -&gt; mul(el, el in cyc), factored[1])));\n\n seen[factored[2]] := recurse(vsets, {}, 1);\n fi;\n\n res := res + seen[factored[2]];\n od;\n\n res/q/2;\nend;\n</pre>\n" }, { "answer_id": 2097543, "score": 1, "text": "<p>If the paths cannot cross, there are a rather limited number of tiles under the constraints given; I make it 5:</p>\n\n<p><a href=\"https://i.stack.imgur.com/AbDPK.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/AbDPK.jpg\" alt=\"tile 1\"></a>\n<a href=\"https://i.stack.imgur.com/Zr6Hj.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Zr6Hj.jpg\" alt=\"tile 2\"></a>\n<a href=\"https://i.stack.imgur.com/V7Mjp.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/V7Mjp.jpg\" alt=\"tile 3\"></a>\n<a href=\"https://i.stack.imgur.com/VDM5T.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VDM5T.jpg\" alt=\"tile 4\"></a>\n<a href=\"https://i.stack.imgur.com/R34BD.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/R34BD.jpg\" alt=\"tile 5\"></a></p>\n" } ]
Question: Number of unique hex tiles with two paths on each edge? Answer 1: <p>This problem is a case of <strong>Power Group Enumeration</strong> in the sense of the paper by Fripertinger which was discussed quite extensively at the following <a href="https://math.stackexchange.com/questions/864116/">MSE link</a>. The set of values consists of the possible edges and is being permuted by the rotations of the hexagon, call this $G$. Edges are represented by products $A_p A_q$ where $p$ and $q$ range from zero to eleven and correspond to a clockwise enumeration of the exits; there are no edges between adjacent exits (same edge). This is a feature of the problem which is open to debate and which I hope I have interpreted correctly. The slots are simply six slots being permuted by the symmetric group $S_6.$ The technique is exactly the same as at the quoted link -- note that we seek sets of edges and not multisets which simplifies things considerably. In a nutshell we consider pairs $(\beta,\alpha)$ of permutations from the edge permutations $G$ and the symmetric group $S_6$ and compute the generating function corresponding to all possible ways of covering $\alpha$ with cycles from $\beta.$ At the end we extract those terms that contain every exit exactly once. A remarkable feature of this problem is that we are working with generating functions here but the generating function that appears when we try to cover $a_1^6$ with cycles (fixed points) from $a_1^{60}$ produces a completely unmanageable number of terms which are impossible to process. Fortunately the contribution of this term can be computed in closed form. It has no symmetries and simply counts all tiles before symmetry. 
As we must exclude edges between adjacent exits we get by inclusion-exclusion</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{k=0}^6 {6\choose k} (-1)^k \frac{(12-2k)!}{2^{6-k} (6-k)!} = 6040.}$$</p> <p>We simply wire this value into the program and detect $a_1^{60}$ during the <strong>Power Group Enumeration</strong> computation. This is all we need to be aware of at this time and the answer is</p> <p>$$\bbox[5px,border:2px solid #00A000]{1060.}$$</p> <p>The reader is invited to verify this result by classifying tiles according to their symmetries and counting the number of tiles in every class.</p> <p>I also verified this by a total enumeration routine which in fact does not require all that much time / space and confirms the above findings. PGE obviously generalizes while simple enumeration does not.</p> <p>This was the Maple code.</p> <pre> with(combinat); pet_autom2cyclesA := proc(src, aut) local numa, numsubs, marks, pos, cycs, data, item, cpos, clen; numsubs := [seq(src[k]=k, k=1..nops(src))]; numa := subs(numsubs, aut); marks := Array([seq(true, pos=1..nops(aut))]); cycs := []; pos := 1; data := []; while pos &lt;= nops(aut) do if marks[pos] then clen := 0; item := []; cpos := pos; while marks[cpos] do marks[cpos] := false; item := [op(item), aut[cpos]]; cpos := numa[cpos]; clen := clen+1; od; cycs := [op(cycs), clen]; data := [op(data), item]; fi; pos := pos+1; od; return [data, mul(a[cycs[k]], k=1..nops(cycs))]; end; pet_cycleind_symm := proc(n) local p, s; option remember; if n=0 then return 1; fi; expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n)); end; edges_all_src := proc() option remember; local edges; edges := add(add(A[p]*A[q], q=p+1..11), p=0..11) - add(A[2*p]*A[2*p+1], p=0..5); [seq(p, p in edges)]; end; pet_perms_edge_cind := proc() option remember; local cind, p, q, perm, sl, edges, edgeperm, rotind; edges := edges_all_src(); cind := []; for rotind from 0 to 5 do sl := [seq(A[p]=A[(p+2*rotind) mod 12], p=0..11)]; edgeperm := subs(sl, 
edges); cind := [op(cind), pet_autom2cyclesA(edges, edgeperm)]; od; cind; end; tiles := proc() option remember; local slot_idx, edge_idx, gf, var, flat, cycs, edge_term, slot_term, contrib, cover, deg, onesize; slot_idx := pet_cycleind_symm(6); edge_idx := pet_perms_edge_cind(); gf := 0; for slot_term in slot_idx do for edge_term in edge_idx do if edge_term[2] = a[1]^60 then next; fi; if type(edge_term[2]/slot_term, &#96;monomial&#96;) then contrib := 1; for var in indets(slot_term) do cycs := select(c-&gt;nops(c)=op(1,var), edge_term[1]); onesize := 0; deg := degree(slot_term, var); for cover in choose(cycs, deg) do onesize := onesize + deg!*op(1,var)^deg* mul(mul(el, el in cyc), cyc in cover); od; contrib := contrib*onesize; od; gf := gf + lcoeff(slot_term)*contrib; fi; od; od; gf := expand(gf); for var from 0 to 11 do gf := coeff(gf, A[var], 1); od; (gf + 6040)/6; end; enum_all_symm := proc() option remember; local recurse, count, edges, orbits; edges := edges_all_src(); orbits := table(); recurse := proc(sofar, sel, n, pos) local orbit, sl, rotind; if n = 6 then count := count + 1; orbit := {}; for rotind from 0 to 5 do sl := [seq(A[p]=A[p+2*rotind mod 12], p=0..11)]; orbit := {op(orbit), subs(sl, sel)}; od; orbits[orbit] := 1; return; fi; if pos &gt; nops(edges) then return; fi; recurse(sofar, sel, n, pos+1); if nops(indets(sofar*edges[pos])) = 2*(n+1) then recurse(sofar*edges[pos], {op(sel), edges[pos]}, n+1, pos+1); fi; end; count := 0; recurse(1, {}, 0, 1); [count, nops([indices(orbits)])]; end; </pre> <p><strong>Addendum Mon Jun 26 2017.</strong> Presenting a completely reworked algorithm and implementation, which gives an instant answer. The previous version computed the entire generating function in the variables representing the exits, expanding intermediate terms even including those that did not contribute. 
As indicated this algorithm breaks down when placing cycles corresponding to $a_1^{60}$ on $a_1^6.$ The new version uses</p> <p>$$[z^k] \prod_{q\in C} (1+zq)$$</p> <p>to represent a choice of $k$ cycles from a set of cycles $C$ and does not expand this term, which means that the contribution from $a_1^{60}$ only consists of $60$ terms as opposed to ${60\choose 6},$ which is not feasible. We thus get a product of these (times a leading coefficient) when there exists a covering of the slot permutation $\alpha$ by cycles from the edge permutation $\beta,$ where $k$ is the number of cycles of a given size in $\alpha$ and $C$ are the cycles of that size from $\beta,$ which is not a conjugacy class in this problem but an actual permutation of the edges. Having constructed this product we differentiate with respect to all variables in sequence and set them to zero thereafter, which isolates the contribution from those terms where each exit occurs just once. We extract the coefficients that correspond to the desired power of $[z^k]$ at the very end, producing a number which gives the possible coverings of $\alpha$ by cycles from $\beta$ that contribute to the count of admissible tiles. The Maple code was as follows (the enumeration routine has also been improved.) 
We omit the prefix that is shared with the first version.</p> <pre> tiles := proc() option remember; local slot_idx, edge_idx, slot_term, edge_term, cycs, deg, gf, var, res; slot_idx := pet_cycleind_symm(6); edge_idx := pet_perms_edge_cind(); res := 0; for slot_term in slot_idx do for edge_term in edge_idx do if type(edge_term[2]/slot_term, &#96;monomial&#96;) then gf := 1; for var in indets(slot_term) do cycs := select(c-&gt;nops(c)=op(1,var), edge_term[1]); deg := degree(slot_term, var); gf := gf*deg!*op(1, var)^deg *mul(1+z[op(1, var)]*mul(el, el in cyc), cyc in cycs); od; for var from 0 to 11 do gf := subs(A[var] = 0, diff(gf, A[var])); od; for var in indets(slot_term) do gf := coeftayl(gf, z[op(1, var)]=0, degree(slot_term, var)); od; res := res + lcoeff(slot_term)*gf; fi; od; od; res/nops(edge_idx); end; enum_all_tiles := proc() option remember; local recurse, count, edges, orbits, sl; edges := edges_all_src(); orbits := table(); sl := [seq([seq(A[p]=A[p+2*r mod 12], p=0..11)], r=1..5)]; recurse := proc(sofar, sel, n, pos) local orbit, rotind; if n = 6 then count := count + 1; orbit := [sel]; for rotind to 5 do orbit := [op(orbit), subs(sl[rotind], sel)]; od; orbits[sort(orbit)[1]] := 1; return; fi; if pos &gt; nops(edges) then return; fi; recurse(sofar, sel, n, pos+1); if nops(indets(sofar*edges[pos])) = 2*(n+1) then recurse(sofar*edges[pos], {op(sel), edges[pos]}, n+1, pos+1); fi; end; count := 0; recurse(1, {}, 0, 1); [count, numelems(orbits)]; end; </pre> <p><strong>Addendum Wed Jun 28 2017.</strong> We present the case of arbitrary polygonal tiles having $q$ sides rather than hexagonal ones to complete this discussion and end on a challenge to the reader. 
We get by inclusion-exclusion for the count with no symmetries the formula</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \sum_{k=0}^q {q\choose k} (-1)^k \frac{(2q-2k)!}{2^{q-k} (q-k)!}.}$$</p> <p>Here we choose the $k$ spots where there is an edge between adjacent exits (which we are wanting to avoid), for a factor of ${q\choose k}.$ This leaves $2q-2k$ exits. We choose two of these to connect by an edge, then another two from the remaining exits, and another two, and so on, for a multinomial coefficient ${2q-2k\choose 2,2,\ldots,2}.$ Here we may choose one set of edges in $(q-k)!$ ways, hence the formula. We obtain the sequence starting at $q=1$:</p> <p>$$0, 2, 8, 60, 544, 6040, 79008, 1190672, 20314880, 387099936,\ldots$$</p> <p>Taking symmetries into account yields the following sequence:</p> <p>$$0, 2, 4, 22, 112, 1060, 11292, 149448, 2257288, 38720728\ldots$$</p> <p>and the challenge to the reader is of course to compute more terms for the latter. This is the Maple code:</p> <pre> edges_all_src := proc(q) option remember; local edges; edges := add(add(A[p]*A[r], r=p+1..2*q-1), p=0..2*q-1) - add(A[2*p]*A[2*p+1], p=0..q-1); [seq(p, p in edges)]; end; pet_perms_edge_cind := proc(q) option remember; local cind, perm, sl, edges, edgeperm, rotind; edges := edges_all_src(q); cind := []; for rotind from 0 to q-1 do sl := [seq(A[p]=A[(p+2*rotind) mod 2*q], p=0..2*q-1)]; edgeperm := subs(sl, edges); cind := [op(cind), pet_autom2cyclesA(edges, edgeperm)]; od; cind; end; tiles := proc(q) option remember; local slot_idx, edge_idx, slot_term, edge_term, cycs, deg, gf, var, res; slot_idx := &#96;if&#96;(q=1, [a[1]], pet_cycleind_symm(q)); edge_idx := pet_perms_edge_cind(q); res := 0; for slot_term in slot_idx do for edge_term in edge_idx do if type(edge_term[2]/slot_term, &#96;monomial&#96;) then gf := 1; for var in indets(slot_term) do cycs := select(c-&gt;nops(c)=op(1,var), edge_term[1]); deg := degree(slot_term, var); gf := gf*deg!*op(1, var)^deg *mul(1+z[op(1, 
var)]*mul(el, el in cyc), cyc in cycs); od; for var from 0 to 2*q-1 do gf := subs(A[var] = 0, diff(gf, A[var])); od; for var in indets(slot_term) do gf := coeftayl(gf, z[op(1, var)]=0, degree(slot_term, var)); od; res := res + lcoeff(slot_term)*gf; fi; od; od; res/nops(edge_idx); end; tiles_all := q -&gt; add(binomial(q,k)*(-1)^k*(2*q-2*k)!/2^(q-k)/(q-k)!, k = 0..q); enum_tiles_all := proc(q) option remember; local recurse, count, edges, orbits, sl; edges := edges_all_src(q); orbits := table(); sl := [seq([seq(A[p]=A[p+2*r mod 2*q], p=0..2*q-1)], r=1..q-1)]; recurse := proc(sofar, sel, n, pos) local orbit, rotind; if n = q then count := count + 1; orbit := [sel]; for rotind to q-1 do orbit := [op(orbit), subs(sl[rotind], sel)]; od; orbits[sort(orbit)[1]] := 1; return; fi; if pos &gt; nops(edges) then return; fi; recurse(sofar, sel, n, pos+1); if nops(indets(sofar*edges[pos])) = 2*(n+1) then recurse(sofar*edges[pos], {op(sel), edges[pos]}, n+1, pos+1); fi; end; count := 0; recurse(1, {}, 0, 1); [count, numelems(orbits)]; end; </pre> <p><strong>Addendum Fri Jun 30 2017.</strong> The data for dihedral symmetries being taken into account (rotational and reflectional) are as follows:</p> <p>$$0, 2, 4, 19, 80, 638, 6054, 76692, 1137284, 19405244,\ldots $$</p> <p><strong>Addendum Mon Jul 3 2017.</strong> It appears that plain Burnside is still the best approach. This requires that we compute the number of tiles fixed by each of the $q$ rotations including the identity, which fixes all tiles. These are counted by the inclusion-exclusion formula given above. For the remaining rotations we have that they must be constant on the cycles (obtained by factorizing the corresponding permutation of edges). This means that to be fixed by a given rotation a cycle of edges is either switched on or off (all edges simultaneously). 
We use a backtracking search to determine which cycles may be switched on at the same time without creating conflict (more than one edge incident on the same exit.) This lets us calculate three more values for rotational symmetries and we obtain</p> <p>$$0, 2, 4, 22, 112, 1060, 11292, 149448, 2257288, 38720728, \\ 740754220, 15648468804, 361711410384, \ldots$$</p> <p>We also find that we have effectively solved the problem for tiles whose number of sides is a prime. We find (more of these can be computed)</p> <p>$$2, 4, 112, 11292, 740754220, 361711410384, 222595582448849152, \\ 258327454310582805036, 661821993709898403923269564, \\ 10266982973657640119698928948136690256,\ldots$$</p> <p>The Maple code for this goes as follows.</p> <pre> with(combinat); pet_autom2cyclesA := proc(src, aut) local numa, numsubs, marks, pos, cycs, data, item, cpos, clen; numsubs := [seq(src[k]=k, k=1..nops(src))]; numa := subs(numsubs, aut); marks := Array([seq(true, pos=1..nops(aut))]); cycs := []; pos := 1; data := []; while pos &lt;= nops(aut) do if marks[pos] then clen := 0; item := []; cpos := pos; while marks[cpos] do marks[cpos] := false; item := [op(item), aut[cpos]]; cpos := numa[cpos]; clen := clen+1; od; cycs := [op(cycs), clen]; data := [op(data), item]; fi; pos := pos+1; od; return [data, mul(a[cycs[k]], k=1..nops(cycs))]; end; edges_all_src := proc(q) option remember; local edges; edges := add(add(A[p]*A[r], r=p+1..2*q-1), p=0..2*q-1) - add(A[2*p]*A[2*p+1], p=0..q-1); [seq(p, p in edges)]; end; tiles_no_symm := q -&gt; add(binomial(q,k)*(-1)^k *(2*q-2*k)!/2^(q-k)/(q-k)!, k=0..q); bs_tiles := proc(q) option remember; local res, sl, recurse, edgecycs, cind, vsets, edges, edgeperm, rotind, seen, factored; edges := edges_all_src(q); res := tiles_no_symm(q); recurse := proc(cycs, sofar, pos) local data, nxt; if nops(sofar) = 2*q then return 1 fi; if pos &gt; nops(cycs) then return 0 fi; data := recurse(cycs, sofar, pos+1); if nops(cycs[pos] intersect sofar) = 0 then data := 
data + recurse(cycs, cycs[pos] union sofar, pos+1); fi; return data; end; cind := []; seen := table(); for rotind to q-1 do sl := [seq(A[p]=A[(p+2*rotind) mod 2*q], p=0..2*q-1)]; edgeperm := subs(sl, edges); cind := [op(cind), pet_autom2cyclesA(edges, edgeperm)]; od; for factored in cind do if not(type(seen[factored[2]], &#96;integer&#96;)) then vsets := map(indets, select(cf -&gt; { seq(degree(cf, v), v in indets(cf)) } = { 1 }, map(cyc -&gt; mul(el, el in cyc), factored[1]))); seen[factored[2]] := recurse(vsets, {}, 1); fi; res := res + seen[factored[2]]; od; res/q; end; </pre> <p>With Burnside we also get three more terms for dihedral symmetry, which yields</p> <p>$$ 0, 2, 4, 19, 80, 638, 6054, 76692, 1137284, 19405244, \\ 370597430, 7825459362, 180862277352,\ldots$$</p> <p>The reason why the above code for cyclic symmetry is structured the way it is becomes apparent when we study the code for dihedral symmetry, which is why I decided to present it here, omitting duplicate code.</p> <pre> edges_all_src := proc(q) option remember; local edges; edges := add(add(C[p]*C[r], r=p+1..2*q-1), p=0..2*q-1) - add(C[2*p]*C[2*p+1], p=0..q-1); subs([seq(C[2*p]=A[p], p=0..q-1), seq(C[2*p+1]=B[p], p=0..q-1)], [seq(p, p in edges)]); end; tiles_no_symm := q -&gt; add(binomial(q,k)*(-1)^k *(2*q-2*k)!/2^(q-k)/(q-k)!, k=0..q); bs_tiles_dh := proc(q) local res, sl, recurse, cind, vsets, edges, edgeperm, rotind, seen, factored; edges := edges_all_src(q); res := tiles_no_symm(q); recurse := proc(cycs, sofar, pos) local data, nxt; if nops(sofar) = 2*q then return 1 fi; if pos &gt; nops(cycs) then return 0 fi; data := recurse(cycs, sofar, pos+1); if nops(cycs[pos] intersect sofar) = 0 then data := data + recurse(cycs, cycs[pos] union sofar, pos+1); fi; return data; end; cind := []; seen := table(); for rotind from 0 to q-1 do if rotind &gt;= 1 then sl := [seq(A[p]=A[(p+rotind) mod q], p=0..q-1), seq(B[p]=B[(p+rotind) mod q], p=0..q-1)]; edgeperm := subs(sl, edges); cind := [op(cind), 
pet_autom2cyclesA(edges, edgeperm)]; fi; sl := [seq(A[p]=B[q-1-((p+rotind) mod q)], p=0..q-1), seq(B[p]=A[q-1-((p+rotind) mod q)], p=0..q-1)]; edgeperm := subs(sl, edges); cind := [op(cind), pet_autom2cyclesA(edges, edgeperm)]; od; for factored in cind do if not(type(seen[factored[2]], &#96;integer&#96;)) then vsets := map(indets, select(cf -&gt; { seq(degree(cf, v), v in indets(cf)) } = { 1 }, map(cyc -&gt; mul(el, el in cyc), factored[1]))); seen[factored[2]] := recurse(vsets, {}, 1); fi; res := res + seen[factored[2]]; od; res/q/2; end; </pre> Answer 2: <p>If the paths cannot cross, there are a rather limited number of tiles under the constraints given; I make it 5:</p> <p><a href="https://i.stack.imgur.com/AbDPK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AbDPK.jpg" alt="tile 1"></a> <a href="https://i.stack.imgur.com/Zr6Hj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zr6Hj.jpg" alt="tile 2"></a> <a href="https://i.stack.imgur.com/V7Mjp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V7Mjp.jpg" alt="tile 3"></a> <a href="https://i.stack.imgur.com/VDM5T.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VDM5T.jpg" alt="tile 4"></a> <a href="https://i.stack.imgur.com/R34BD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R34BD.jpg" alt="tile 5"></a></p>
A continuous function on a half-closed interval
2,097,515
4
https://math.stackexchange.com/questions/2097515/a-continuous-function-on-a-half-closed-interval
<p>I have this question: If $f:[0,\infty) \to \mathbb R$ is continuous and $$\lim _{x\to \infty }\:f\left(x\right)\:=\:L$$ then $f$ has a maximum or a minimum on the interval.</p> <p>I've already proved that this function (on the same terms and limit) is bounded on the interval. I understand why this is true - because the limit can be below the function and then it won't have a minimum, or above the function and then it won't have a maximum. I don't know how to write it formally.</p>
[ { "answer_id": 2097545, "score": 6, "text": "<p>If $f$ is not a constant function there exists $a \\geqslant 0$ such that $f(a) \\neq L$. Let $M=f(a)$ and let us suppose that $M&gt;L$ (an analogous reasoning would apply if $M&lt;L)$.\nWe want to prove that $f$ has a maximum.\nLet $\\epsilon=\\frac{M-L}{2}&gt;0$.</p>\n\n<p>Since $\\lim_{x\\to \\infty } f(x)=L$ you have the existence of $A &gt; 0$ such that for all $x \\geqslant A$, $f(x) \\in [L-\\epsilon, L+\\epsilon]$. Then for all $x \\geqslant A$, $f(x) \\leqslant L+\\frac{M-L}{2} &lt; L + M -L = M$.</p>\n\n<p>On $[0, A]$, $f$ is continuous and thus has a maximum $B \\geqslant M$ (because $a \\in [0, A]$). Consequently $f$ has a maximum on $[0, \\infty)$.</p>\n\n<p>If $f$ is constant, it should be equal to $L$ and $f$ has a maximum and a minimum.</p>\n\n<p>If $M&lt;L$ we would have proven that $f$ has a minimum on $[0, \\infty)$. Note moreover that if there exists $a,\\ b$ such that $f(a) \\geqslant L$ and $f(b) \\leqslant L$ then $f$ has a maximum and a minimum on $[0, \\infty)$.</p>\n" }, { "answer_id": 2097569, "score": 2, "text": "<p>Take $\\phi\\colon[0,1) \\to [0,+\\infty)$, $\\phi(x)=1/(1-x)-1$ (any continuous increasing unbounded function would work) and define\nthe function $g\\colon [0,1]\\to \\mathbb R$ by\n$$\n g(t) = \\begin{cases}\n f(\\phi(t)) &amp; \\text{if $t\\in[0,1)$}\\\\\n L &amp; \\text{if $t=1$}\n \\end{cases}.\n$$\nClearly $g$ is continuous on $[0,1]$ hence by the Weierstrass theorem has minimum and maximum. Either the minimum or the maximum is not achieved in $t=1$. Being $\\phi$ increasing, the corresponding point $\\phi(t)$ is a minimum or maximum for $f$.</p>\n" } ]
Question: A continuous function on a half-closed interval Answer 1: <p>If $f$ is not a constant function there exists $a \geqslant 0$ such that $f(a) \neq L$. Let $M=f(a)$ and let us suppose that $M&gt;L$ (an analogous reasoning would apply if $M&lt;L)$. We want to prove that $f$ has a maximum. Let $\epsilon=\frac{M-L}{2}&gt;0$.</p> <p>Since $\lim_{x\to \infty } f(x)=L$ you have the existence of $A &gt; 0$ such that for all $x \geqslant A$, $f(x) \in [L-\epsilon, L+\epsilon]$. Then for all $x \geqslant A$, $f(x) \leqslant L+\frac{M-L}{2} &lt; L + M -L = M$.</p> <p>On $[0, A]$, $f$ is continuous and thus has a maximum $B \geqslant M$ (because $a \in [0, A]$). Consequently $f$ has a maximum on $[0, \infty)$.</p> <p>If $f$ is constant, it should be equal to $L$ and $f$ has a maximum and a minimum.</p> <p>If $M&lt;L$ we would have proven that $f$ has a minimum on $[0, \infty)$. Note moreover that if there exists $a,\ b$ such that $f(a) \geqslant L$ and $f(b) \leqslant L$ then $f$ has a maximum and a minimum on $[0, \infty)$.</p> Answer 2: <p>Take $\phi\colon[0,1) \to [0,+\infty)$, $\phi(x)=1/(1-x)-1$ (any continuous increasing unbounded function would work) and define the function $g\colon [0,1]\to \mathbb R$ by $$ g(t) = \begin{cases} f(\phi(t)) &amp; \text{if $t\in[0,1)$}\\ L &amp; \text{if $t=1$} \end{cases}. $$ Clearly $g$ is continuous on $[0,1]$ hence by the Weierstrass theorem has minimum and maximum. Either the minimum or the maximum is not achieved in $t=1$. Being $\phi$ increasing, the corresponding point $\phi(t)$ is a minimum or maximum for $f$.</p>
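A concrete instance may help see the dichotomy (my own illustration, not part of either proof): take $f(x)=1/(1+x)$ with $L=0$. Since $f(0)=1>L$, the first answer guarantees a maximum; and indeed no minimum exists here, because $\inf f = L$ is never attained. A quick Python grid check (grid and tolerance are arbitrary choices):

```python
def f(x):
    # f(0) = 1 > L = 0, so by the argument above f attains a maximum
    return 1.0 / (1.0 + x)

L = 0.0
xs = [k * 0.01 for k in range(100_001)]  # grid on [0, 1000]
vals = [f(x) for x in xs]

max_val = max(vals)   # attained, at x = 0
min_val = min(vals)   # decreases toward L but never reaches it
```

The maximum is attained at $x=0$, while every sampled value stays strictly above $L$, consistent with the infimum being the unattained limit.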
Integral of $\int\frac{dx}{x^2\sqrt{1-\frac{1}{x^2}}}$
2,097,549
4
https://math.stackexchange.com/questions/2097549/integral-of-int-fracdxx2-sqrt1-frac1x2
<p>I was asked to find the following integral: </p> <p>$$\int\frac{dx}{x^2\sqrt{1-\frac{1}{x^2}}}$$</p> <p>What I tried was to replace $\sqrt{1-\frac{1}{x^2}}$ with $u$ so that: $$du=\frac{dx}{x^3\sqrt{1-\frac{1}{x^2}}} \Rightarrow du*x=\frac{dx}{x^2\sqrt{1-\frac{1}{x^2}}}$$ And:$$x=\sqrt{\frac{1}{1-u^2}}$$ And we can replace: $$\int\frac{dx}{x^2\sqrt{1-\frac{1}{x^2}}}=\int\frac{du}{\sqrt{1-u^2}}=\arctan(u)+C=\arctan(\sqrt{1-\frac{1}{x^2}})+C$$ The problem is, when that result is differentiated we don't get the original expression. I just can't find my mistake, so some help would be appreciated.</p>
[ { "answer_id": 2097559, "score": 4, "text": "<p>Substitute $\\text{u}=\\frac{1}{x}$:</p>\n\n<p>$$\\mathcal{I}\\left(x\\right)=\\int\\frac{1}{x^2\\sqrt{1-\\frac{1}{x^2}}}\\space\\text{d}x=-\\int\\frac{1}{\\sqrt{1-\\text{u}^2}}\\space\\text{d}\\text{u}$$</p>\n\n<hr>\n\n<p>Now, use:</p>\n\n<p>$$\\frac{\\text{d}\\arcsin\\left(x\\right)}{\\text{d}x}=\\frac{1}{\\sqrt{1-x^2}}$$</p>\n\n<hr>\n\n<p>So, we get:</p>\n\n<p>$$\\mathcal{I}\\left(x\\right)=\\text{C}-\\arcsin\\left(\\frac{1}{x}\\right)=\\text{C}-\\text{arccsc}\\left(x\\right)$$</p>\n" }, { "answer_id": 2097587, "score": 4, "text": "<p>The substitution $u=\\sqrt{1-\\frac{1}{x^2}}$ is equivalent to $|x|=\\frac{1}{\\sqrt{1-u^2}}$. For $x&gt;1$, we have</p>\n\n<p>$$\\begin{align}\n\\int \\frac{1}{x^2\\sqrt{1-\\frac1{x^2}}}\\,dx&amp;=\\int \\frac{1}{\\sqrt{1-u^2}}\\,du\\\\\\\\\n&amp;=\\arcsin(u)+C\\\\\\\\\n&amp;=\\arcsin\\left(\\frac{\\sqrt{x^2-1}}{x}\\right)+C\\\\\\\\\n&amp;=-\\arcsin(1/x)+C'\n\\end{align}$$</p>\n\n<p>where we used the identity $\\arcsin(1/x)+\\arcsin\\left(\\frac{\\sqrt{x^2-1}}{x}\\right)=(\\pi/2)\\text{sgn}(x)=\\pi/2$ when $x&gt;1$.</p>\n\n<p>One can proceed similarly for the case in which $x&lt;-1$.</p>\n" }, { "answer_id": 2097552, "score": 1, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{x^2\\sqrt{1-\\dfrac1{x^2}}}=\\dfrac{|x|}{x^2\\sqrt{x^2-1}}$$</p>\n\n<p>Set $\\sqrt{x^2-1}=u\\implies x^2=u^2+1$ and $x\\ dx=u\\ du$</p>\n" } ]
Question: Integral of $\int\frac{dx}{x^2\sqrt{1-\frac{1}{x^2}}}$ Answer 1: <p>Substitute $\text{u}=\frac{1}{x}$:</p> <p>$$\mathcal{I}\left(x\right)=\int\frac{1}{x^2\sqrt{1-\frac{1}{x^2}}}\space\text{d}x=-\int\frac{1}{\sqrt{1-\text{u}^2}}\space\text{d}\text{u}$$</p> <hr> <p>Now, use:</p> <p>$$\frac{\text{d}\arcsin\left(x\right)}{\text{d}x}=\frac{1}{\sqrt{1-x^2}}$$</p> <hr> <p>So, we get:</p> <p>$$\mathcal{I}\left(x\right)=\text{C}-\arcsin\left(\frac{1}{x}\right)=\text{C}-\text{arccsc}\left(x\right)$$</p> Answer 2: <p>The substitution $u=\sqrt{1-\frac{1}{x^2}}$ is equivalent to $|x|=\frac{1}{\sqrt{1-u^2}}$. For $x&gt;1$, we have</p> <p>$$\begin{align} \int \frac{1}{x^2\sqrt{1-\frac1{x^2}}}\,dx&amp;=\int \frac{1}{\sqrt{1-u^2}}\,du\\\\ &amp;=\arcsin(u)+C\\\\ &amp;=\arcsin\left(\frac{\sqrt{x^2-1}}{x}\right)+C\\\\ &amp;=-\arcsin(1/x)+C' \end{align}$$</p> <p>where we used the identity $\arcsin(1/x)+\arcsin\left(\frac{\sqrt{x^2-1}}{x}\right)=(\pi/2)\text{sgn}(x)=\pi/2$ when $x&gt;1$.</p> <p>One can proceed similarly for the case in which $x&lt;-1$.</p> Answer 3: <p>HINT:</p> <p>$$\dfrac1{x^2\sqrt{1-\dfrac1{x^2}}}=\dfrac{|x|}{x^2\sqrt{x^2-1}}$$</p> <p>Set $\sqrt{x^2-1}=u\implies x^2=u^2+1$ and $x\ dx=u\ du$</p>
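The competing antiderivatives are easy to compare numerically (my own Python sketch; the sample points, step size, and tolerances are arbitrary choices): for $x>1$ the derivative of $-\arcsin(1/x)$ from the answers matches the integrand, while the question's $\arctan(\sqrt{1-1/x^2})$ does not.

```python
import math

def integrand(x):
    return 1.0 / (x**2 * math.sqrt(1.0 - 1.0 / x**2))

def F(x):
    # antiderivative from the answers (valid for x > 1): -arcsin(1/x)
    return -math.asin(1.0 / x)

def G(x):
    # the candidate from the question, which is where the mistake hides
    return math.atan(math.sqrt(1.0 - 1.0 / x**2))

def num_deriv(h, x, eps=1e-6):
    # central finite difference
    return (h(x + eps) - h(x - eps)) / (2 * eps)
```

At, say, $x=2$ the finite difference of `F` agrees with the integrand to many digits, while the finite difference of `G` is visibly off, confirming that the error is the $\arctan$ in the last step.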
A complex number $a+bi$ as a continued fraction.
2,097,697
4
https://math.stackexchange.com/questions/2097697/a-complex-number-abi-as-a-continued-fraction
<p>I tried to learn about continued fractions of rational and irrational numbers, but when it comes to complex numbers I cannot find anything reasonable about that. Is there a way to write complex numbers as a finite or infinite continued fraction, and what is the algorithm for this?</p>
[ { "answer_id": 4347552, "score": 2, "text": "<p>You might try continued fractions using a quadratic formula solution with complex roots, then treat the real variable as a complex one. The coefficients could either be real or complex. I have an example of this with real coefficients:</p>\n<pre><code> z = (3)/(1 - 3/(1 - 3/(1 - ...... )))\n\n The solution would be from the equation:\n\n z = 3/(1 - z) or\n\n z^2 - z + 3 = 0 \n\n This would result in:\n\n z = 1/2 + (1/2)*(sqrt 11)i\n\n or\n\n z = 1/2 - (1/2)*(sqrt 11)i\n\n \n</code></pre>\n" } ]
Question: A complex number $a+bi$ as a continued fraction. Answer 1: <p>You might try continued fractions using a quadratic formula solution with complex roots, then treat the real variable as a complex one. The coefficients could either be real or complex. I have an example of this with real coefficients:</p> <pre><code> z = (3)/(1 - 3/(1 - 3/(1 - ...... ))) The solution would be from the equation: z = 3/(1 - z) or z^2 - z + 3 = 0 This would result in: z = 1/2 + (1/2)*(sqrt 11)i or z = 1/2 - (1/2)*(sqrt 11)i </code></pre>
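As a quick numerical check of the quadratic-formula idea (my own Python sketch): a value represented by the periodic fraction $z = 3/(1 - 3/(1 - \cdots))$ satisfies the fixed-point equation $z = 3/(1-z)$, i.e. $z^2 - z + 3 = 0$, whose roots are $\tfrac12 \pm \tfrac{\sqrt{11}}{2}i$.

```python
import cmath

# fixed-point equation of the periodic fraction z = 3/(1 - 3/(1 - ...)):
#   z = 3/(1 - z)  <=>  z**2 - z + 3 = 0
disc = cmath.sqrt(1 - 4 * 3)               # sqrt(-11), purely imaginary
roots = [(1 + disc) / 2, (1 - disc) / 2]   # 1/2 +/- (sqrt(11)/2) i

def step(z):
    # one level of the continued fraction
    return 3 / (1 - z)
```

Both roots reproduce themselves under one step of the fraction. Note this is only a consistency check of the algebra: the iteration is neutral at these fixed points (the derivative of $3/(1-z)$ has modulus $1$ there), so finite truncations of the fraction do not actually converge to the roots.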
Prove $a_{n+2}=3a_n+2\sqrt{2a_n^2+2a_{n+1}^2}$ is an integer
2,097,985
4
https://math.stackexchange.com/questions/2097985/prove-a-n2-3a-n2-sqrt2a-n22a-n12-is-an-integer
<p>Given the sequence $$a_1=a_2=1;\ a_ {n+2} = 3a_n + 2\sqrt{2a_n^2 + 2a_{n+1}^2}$$ prove that $a_n$ is an integer for all $n\in\mathbb N$.</p> <p><strong>Attempt</strong></p> <p>It is enough to show that $2a_n^2 + 2a_{n+1}^2$ is a perfect square. That means it's an even perfect square and so divisible by 4; thus $a_n^2+a_{n+1}^2=2k^2$ for some integer $k$. I think can solve this diophantine equation but I can't relate the solution to the original problem. Can anyone help?</p>
[ { "answer_id": 2098089, "score": 4, "text": "<p>$a_{n+2}\\ =\\ 3a_n+2\\sqrt{2a_n^2+2a_{n+1}^2}$</p>\n\n<p>$\\implies\\ \\left(a_{n+2}-3a_n\\right)^2\\ =\\ 8a_n^2+8a_{n+1}^2$</p>\n\n<p>$\\implies\\ a_n^2-6a_{n+2}a_n-8a_{n+1}^2+a_{n+2}^2=0$</p>\n\n<p>Treat this as a quadratic equation in $a_n$. The discriminant is</p>\n\n<p>$\\Delta_n\\ =\\ 36a_{n+2}^2+32a_{n+1}^2-4a_{n+2}^2 = 4\\left(8a_{n+1}^2+8a_{n+2}^2\\right)$</p>\n\n<p>Thus</p>\n\n<p>$a_n\\ =\\ \\dfrac{6a_{n+2}-\\sqrt{\\Delta_n}}2\\ (\\because a_n&lt;a_{n+2})$</p>\n\n<p>$=\\ 3a_{n+2}-2\\sqrt{2a_{n+1}^2+2a_{n+2}^2}$</p>\n\n<p>$\\implies\\ -a_n+3a_{n+1}+3a_{n+2}\\ =\\ 3a_{n+1}+2\\sqrt{2a_{n+1}^2+2a_{n+2}^2}=a_{n+3}$</p>\n\n<p>Since $a_1=a_2=1$ and $a_3=7$ are integers, $a_4$ is an integer; by induction $a_n\\in\\mathbb Z$ for all $n\\in\\mathbb N$.</p>\n" }, { "answer_id": 2098021, "score": 1, "text": "<p>Let $b_n=\\sqrt{2a_n^2+2a_{n+1}^2}$. Show that $$b_{n+1}=3b_n+4a_{n}.$$</p>\n" }, { "answer_id": 2098119, "score": 0, "text": "<p>We have that:</p>\n\n<p>$$a_{n+2}-3a_n=2\\sqrt{2a_n^2+2{a_{n+1}}^2}\\Rightarrow (a_{n+2}-3a_n)^2=(2\\sqrt{2a_n^2+2{a_{n+1}}^2})^2\\Rightarrow\\\\ {a_{n+2}}^2-6a_na_{n+2}+9a_n^2=8a_n^2+8{a_{n+1}}^2\\Rightarrow {a_{n+2}}^2+{a_{n+1}}^2=-a_n^2+6a_na_{n+2}+9{a_{n+1}}^2\\Rightarrow \\\\{a_{n+2}}^2+{a_{n+1}}^2=-a_n^2+6a_n(3a_n+2\\sqrt{2a_n^2+2{a_{n+1}}^2})+9{a_{n+1}}^2\\Rightarrow\\\\ {a_{n+2}}^2+{a_{n+1}}^2=17a_n^2+9{a_{n+1}}^2+12a_n\\sqrt{2a_n^2+2{a_{n+1}}^2}\\Rightarrow\\\\ 2({a_{n+2}}^2+{a_{n+1}}^2)={(4a_n)}^2+\\left(3\\sqrt{2a_n^2+2{a_{n+1}}^2}\\right)^2+2\\cdot (4a_n)\\cdot \\left(3\\sqrt{2a_n^2+2{a_{n+1}}^2}\\right)\\Rightarrow\\\\ 2({a_{n+2}}^2+{a_{n+1}}^2)=\\left(4a_n+3\\sqrt{2a_n^2+2{a_{n+1}}^2}\\right)^2\\ (1)$$\nNow we will prove the claim with induction.</p>\n\n<p>It is a matter of a few calculations to verify that $a_3$ and $a_4$ are integers. </p>\n\n<p>Now we suppose that $a_n$ is an integer for every $4\\leq n\\leq k$. 
Then we immediately get that $2\\sqrt{2{a_{k-2}}^2+2{a_{k-1}}^2}$ is an integer, being the difference of the integers $a_k$ and $3a_{k-2}$; so $\\sqrt{2{a_{k-2}}^2+2{a_{k-1}}^2}$ is rational, and since its square is an integer, it is itself an integer. From $(1)$ it follows that $\\sqrt{2{a_{k-1}}^2+2{a_k}^2}$ is an integer too. </p>\n\n<p>Hence $a_{k+1}=3a_{k-1}+2\\sqrt{2{a_{k-1}}^2+2{a_k}^2}\\in \\mathbb Z$ as a sum of integers and the inductive proof is complete. Thus we are done. </p>\n" }, { "answer_id": 2098125, "score": 0, "text": "<p>Will Jagy is right ... first port of call ... calculate the first few values $a_3=7$,$a_4=23$,$a_5=89$,$a_6=329$,$a_7=1231$.... I wrote a little C# project to do this for me. Now google integer sequences &amp; go to \"The On-Line Encyclopedia of Integer Sequences\". Put the values $1,1,7,23,89,329,1231,$ &amp; hit search ... bingo ... it is sequence 217233, Expansion of $(1-2x+x^2)/(1-3x-3x^2+x^3)$. So we now know the solution ... we just need to prove it ... the denominator gives the equivalent recurrence relation $a_{n+3}=3a_{n+2}+3a_{n+1}-a_n$, the numerator is dictated by the initial conditions ... there is still some work to do to complete this answer but this gives a good start. </p>\n" } ]
Question: Prove $a_{n+2}=3a_n+2\sqrt{2a_n^2+2a_{n+1}^2}$ is an integer Answer 1: <p>$a_{n+2}\ =\ 3a_n+2\sqrt{2a_n^2+2a_{n+1}^2}$</p> <p>$\implies\ \left(a_{n+2}-3a_n\right)^2\ =\ 8a_n^2+8a_{n+1}^2$</p> <p>$\implies\ a_n^2-6a_{n+2}a_n-8a_{n+1}^2+a_{n+2}^2=0$</p> <p>Treat this as a quadratic equation in $a_n$. The discriminant is</p> <p>$\Delta_n\ =\ 36a_{n+2}^2+32a_{n+1}^2-4a_{n+2}^2 = 4\left(8a_{n+1}^2+8a_{n+2}^2\right)$</p> <p>Thus</p> <p>$a_n\ =\ \dfrac{6a_{n+2}-\sqrt{\Delta_n}}2\ (\because a_n&lt;a_{n+2})$</p> <p>$=\ 3a_{n+2}-2\sqrt{2a_{n+1}^2+2a_{n+2}^2}$</p> <p>$\implies\ -a_n+3a_{n+1}+3a_{n+2}\ =\ 3a_{n+1}+2\sqrt{2a_{n+1}^2+2a_{n+2}^2}=a_{n+3}$</p> <p>Since $a_1=a_2=1$ and $a_3=7$ are integers, $a_4$ is an integer; by induction $a_n\in\mathbb Z$ for all $n\in\mathbb N$.</p> Answer 2: <p>Let $b_n=\sqrt{2a_n^2+2a_{n+1}^2}$. Show that $$b_{n+1}=3b_n+4a_{n}.$$</p> Answer 3: <p>We have that:</p> <p>$$a_{n+2}-3a_n=2\sqrt{2a_n^2+2{a_{n+1}}^2}\Rightarrow (a_{n+2}-3a_n)^2=(2\sqrt{2a_n^2+2{a_{n+1}}^2})^2\Rightarrow\\ {a_{n+2}}^2-6a_na_{n+2}+9a_n^2=8a_n^2+8{a_{n+1}}^2\Rightarrow {a_{n+2}}^2+{a_{n+1}}^2=-a_n^2+6a_na_{n+2}+9{a_{n+1}}^2\Rightarrow \\{a_{n+2}}^2+{a_{n+1}}^2=-a_n^2+6a_n(3a_n+2\sqrt{2a_n^2+2{a_{n+1}}^2})+9{a_{n+1}}^2\Rightarrow\\ {a_{n+2}}^2+{a_{n+1}}^2=17a_n^2+9{a_{n+1}}^2+12a_n\sqrt{2a_n^2+2{a_{n+1}}^2}\Rightarrow\\ 2({a_{n+2}}^2+{a_{n+1}}^2)={(4a_n)}^2+\left(3\sqrt{2a_n^2+2{a_{n+1}}^2}\right)^2+2\cdot (4a_n)\cdot \left(3\sqrt{2a_n^2+2{a_{n+1}}^2}\right)\Rightarrow\\ 2({a_{n+2}}^2+{a_{n+1}}^2)=\left(4a_n+3\sqrt{2a_n^2+2{a_{n+1}}^2}\right)^2\ (1)$$ Now we will prove the claim with induction.</p> <p>It is a matter of a few calculations to verify that $a_3$ and $a_4$ are integers. </p> <p>Now we suppose that $a_n$ is an integer for every $4\leq n\leq k$. Then we immediately get that $2\sqrt{2{a_{k-2}}^2+2{a_{k-1}}^2}$ is an integer, being the difference of the integers $a_k$ and $3a_{k-2}$; so $\sqrt{2{a_{k-2}}^2+2{a_{k-1}}^2}$ is rational, and since its square is an integer, it is itself an integer. From $(1)$ it follows that $\sqrt{2{a_{k-1}}^2+2{a_k}^2}$ is an integer too. 
</p> <p>Hence $a_{k+1}=3a_{k-1}+2\sqrt{2{a_{k-1}}^2+2{a_k}^2}\in \mathbb Z$ as a sum of integers and the inductive proof is complete. Thus we are done. </p> Answer 4: <p>Will Jagy is right ... first port of call ... calculate the first few values $a_3=7$,$a_4=23$,$a_5=89$,$a_6=329$,$a_7=1231$.... I wrote a little C# project to do this for me. Now google integer sequences &amp; go to "The On-Line Encyclopedia of Integer Sequences". Put the values $1,1,7,23,89,329,1231,$ &amp; hit search ... bingo ... it is sequence 217233, Expansion of $(1-2x+x^2)/(1-3x-3x^2+x^3)$. So we now know the solution ... we just need to prove it ... the denominator gives the equivalent recurrence relation $a_{n+3}=3a_{n+2}+3a_{n+1}-a_n$, the numerator is dictated by the initial conditions ... there is still some work to do to complete this answer but this gives a good start. </p>
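The numerical claims in the answers above — that the radicand $2a_n^2+2a_{n+1}^2$ is a perfect square at every step, and that the terms satisfy the linear recurrence $a_{n+3}=3a_{n+2}+3a_{n+1}-a_n$ — can be checked with a short script. The following Python sketch is my own illustration (not part of any answer); it uses `math.isqrt` to test for perfect squares exactly:

```python
from math import isqrt

def seq(n_terms):
    """First n_terms of a_1 = a_2 = 1, a_{n+2} = 3*a_n + 2*sqrt(2*a_n^2 + 2*a_{n+1}^2),
    asserting at every step that the radicand is a perfect square."""
    a = [1, 1]
    while len(a) < n_terms:
        x, y = a[-2], a[-1]
        r = 2 * x * x + 2 * y * y
        s = isqrt(r)
        assert s * s == r, "radicand is not a perfect square"
        a.append(3 * x + 2 * s)
    return a

terms = seq(8)
# cross-check against the linear recurrence suggested by the OEIS lookup
for n in range(len(terms) - 3):
    assert terms[n + 3] == 3 * terms[n + 2] + 3 * terms[n + 1] - terms[n]
print(terms)  # [1, 1, 7, 23, 89, 329, 1231, 4591]
```

Raising `n_terms` produces further terms for an OEIS search or for spot-checking a proof.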
Can all laws be derived from axioms?
2,098,082
4
https://math.stackexchange.com/questions/2098082/can-all-laws-be-derived-from-axioms
<p>I don't really understand the purpose of axioms if some laws cannot be derived from them. For example, how is one supposed to prove De Morgan's laws with only the axioms of probability?</p>
[ { "answer_id": 2098144, "score": 9, "text": "<p>In fact, you will hardly find any contemporary mathematical work providing an explicit, self-contained axiomatic system; it's just too laborious to be feasible in practice. One can only hope that they state clearly all those postulates that are too technical to be considered already known. In Probability Theory they may take (naive) Set Theory for granted, and thus consider De Morgan's laws as something easy to prove from what you ‘already know’.</p>\n\n<p>There is a special branch of mathematics, called Mathematical Logic, which is actually dedicated to investigating how modern mathematical theories can be reduced to fully-rigorous axiomatic systems (they are trying to ‘formalize’ mathematics). To take a glimpse of how complex this task is, have a look at this site:</p>\n\n<p><a href=\"http://us.metamath.org/mpegif/mmset.html\" rel=\"noreferrer\">http://us.metamath.org/mpegif/mmset.html</a></p>\n\n<p>(Of course, what they do in Metamath is an extreme instance of what professional mathematical logicians are concerned with.)</p>\n" }, { "answer_id": 2098098, "score": 6, "text": "<p>You probably can, and it's extremely rare that you have to. De Morgan's laws probably wouldn't be the worst (and they don't really involve probability), but one of the points of theorems is that we can use the ones we already have proven to prove new ones, without going all the way back to axioms. 
Because if we use theorem $A$ in the proof of theorem $B$, we know that given a proof of $A$ using only axioms, we could substitute that into our proof of $B$ and have a proof of $B$ using only axioms.</p>\n\n<p>(I skipped over some details in an attempt to make it clearer).</p>\n" }, { "answer_id": 2098304, "score": 4, "text": "<p>In theory, every <strong>theorem</strong> in some practical formal system is by definition a sentence that can be <strong>deduced</strong> in finitely many steps by strictly following the allowed <strong>inference rules</strong>. Inference rules are more general than axioms, and commonly are of the form &quot;If you can deduce ... then you can deduce ...&quot;. Axioms can be easily captured by the special case of inference rules where the condition is vacuous (you need not be able to deduce anything), in other words by inference rules of the form &quot;You can always deduce ...&quot;.</p>\n<p>Concerning the axioms of probability, they are not stated in a vacuum, but within an over-arching formal system. 
Let's take a simpler example of the axiomatization of groups:</p>\n<blockquote>\n<p>A <strong>group</strong> is an ordered triple <span class=\"math-container\">$(G,∘,e)$</span> where <span class=\"math-container\">$G$</span> is a collection and <span class=\"math-container\">$∘$</span> is a binary operation on <span class=\"math-container\">$G$</span> and <span class=\"math-container\">$e \\in G$</span> such that:</p>\n<ul>\n<li><p><span class=\"math-container\">$\\forall x,y \\in G\\ ( x∘y \\in G )$</span>.</p>\n</li>\n<li><p><span class=\"math-container\">$\\forall x,y,z \\in G\\ ( (x∘y)∘z = x∘(y∘z) )$</span>.</p>\n</li>\n<li><p><span class=\"math-container\">$\\forall x \\in G\\ ( x∘e = x = e∘x )$</span>.</p>\n</li>\n<li><p><span class=\"math-container\">$\\forall x \\in G\\ \\exists y \\in G\\ ( x∘y = e = y∘x )$</span>.</p>\n</li>\n</ul>\n</blockquote>\n<p>Notice that these axioms are more-or-less first-order sentences applied to <span class=\"math-container\">$(G,∘,e)$</span>. But we certainly cannot deduce a lot about groups using only the axioms, such as the fact that every element in a finite group has a finite order that divides the size of the group. There is no easy explanation of this, but intuitively it is because if you restrict yourself to working within the group axiomatization you would only be able to deduce things from the perspective of a general group, and the language you have is not even able to refer to multiple elements, iterated operations or collections of groups, not to say reason about them.</p>\n<p>This is why group theory (and every other branch of mathematics) is investigated <strong>'outside'</strong> the group axiomatization, in the foundational system which allows you to set up axiomatizations in the first place. The most common foundational system in modern mathematics is usually said to be ZFC set theory, but actually it is ZFC plus a fair amount of syntactic sugar to help with actual usage. 
The language of pure ZFC only has a single binary predicate symbol &quot;<span class=\"math-container\">$\\in$</span>&quot;, making it very cumbersome to express anything of interest, but if we add <a href=\"https://math.stackexchange.com/a/1864310/21820\">on-the-fly definitorial expansion</a> then it very drastically condenses proofs and the resulting system ZFC* is a reasonably usable foundational system.</p>\n<p>In real mathematical practice mathematicians do not even use ZFC*, but rather most mathematics is done in a fairly informal way, with the main criterion of correctness being that any sufficiently trained reader is capable of convincing himself/herself that the argument <strong>can</strong> be translated to a formal proof in ZFC*. (Actually some mathematicians do not even know the formal specifications of ZFC, but they know what they are allowed to do that set theorists tell them is translatable to ZFC.)</p>\n" } ]
Question: Can all laws be derived from axioms? Answer 1: <p>In fact, you will hardly find any contemporary mathematical work providing an explicit, self-contained axiomatic system; it's just too laborious to be feasible in practice. One can only hope that they state clearly all those postulates that are too technical to be considered already known. In Probability Theory they may take (naive) Set Theory for granted, and thus consider De Morgan's laws as something easy to prove from what you ‘already know’.</p> <p>There is a special branch of mathematics, called Mathematical Logic, which is actually dedicated to investigating how modern mathematical theories can be reduced to fully-rigorous axiomatic systems (they are trying to ‘formalize’ mathematics). To take a glimpse of how complex this task is, have a look at this site:</p> <p><a href="http://us.metamath.org/mpegif/mmset.html" rel="noreferrer">http://us.metamath.org/mpegif/mmset.html</a></p> <p>(Of course, what they do in Metamath is an extreme instance of what professional mathematical logicians are concerned with.)</p> Answer 2: <p>You probably can, and it's extremely rare that you have to. De Morgan's laws probably wouldn't be the worst (and they don't really involve probability), but one of the points of theorems is that we can use the ones we already have proven to prove new ones, without going all the way back to axioms. Because if we use theorem $A$ in the proof of theorem $B$, we know that given a proof of $A$ using only axioms, we could substitute that into our proof of $B$ and have a proof of $B$ using only axioms.</p> <p>(I skipped over some details in an attempt to make it clearer).</p> Answer 3: <p>In theory, every <strong>theorem</strong> in some practical formal system is by definition a sentence that can be <strong>deduced</strong> in finitely many steps by strictly following the allowed <strong>inference rules</strong>. 
Inference rules are more general than axioms, and commonly are of the form &quot;If you can deduce ... then you can deduce ...&quot;. Axioms can be easily captured by the special case of inference rules where the condition is vacuous (you need not be able to deduce anything), in other words by inference rules of the form &quot;You can always deduce ...&quot;.</p> <p>Concerning the axioms of probability, they are not stated in a vacuum, but within an over-arching formal system. Let's take a simpler example of the axiomatization of groups:</p> <blockquote> <p>A <strong>group</strong> is an ordered triple <span class="math-container">$(G,∘,e)$</span> where <span class="math-container">$G$</span> is a collection and <span class="math-container">$∘$</span> is a binary operation on <span class="math-container">$G$</span> and <span class="math-container">$e \in G$</span> such that:</p> <ul> <li><p><span class="math-container">$\forall x,y \in G\ ( x∘y \in G )$</span>.</p> </li> <li><p><span class="math-container">$\forall x,y,z \in G\ ( (x∘y)∘z = x∘(y∘z) )$</span>.</p> </li> <li><p><span class="math-container">$\forall x \in G\ ( x∘e = x = e∘x )$</span>.</p> </li> <li><p><span class="math-container">$\forall x \in G\ \exists y \in G\ ( x∘y = e = y∘x )$</span>.</p> </li> </ul> </blockquote> <p>Notice that these axioms are more-or-less first-order sentences applied to <span class="math-container">$(G,∘,e)$</span>. But we certainly cannot deduce a lot about groups using only the axioms, such as the fact that every element in a finite group has a finite order that divides the size of the group. 
There is no easy explanation of this, but intuitively it is because if you restrict yourself to working within the group axiomatization you would only be able to deduce things from the perspective of a general group, and the language you have is not even able to refer to multiple elements, iterated operations or collections of groups, not to say reason about them.</p> <p>This is why group theory (and every other branch of mathematics) is investigated <strong>'outside'</strong> the group axiomatization, in the foundational system which allows you to set up axiomatizations in the first place. The most common foundational system in modern mathematics is usually said to be ZFC set theory, but actually it is ZFC plus a fair amount of syntactic sugar to help with actual usage. The language of pure ZFC only has a single binary predicate symbol &quot;<span class="math-container">$\in$</span>&quot;, making it very cumbersome to express anything of interest, but if we add <a href="https://math.stackexchange.com/a/1864310/21820">on-the-fly definitorial expansion</a> then it very drastically condenses proofs and the resulting system ZFC* is a reasonably usable foundational system.</p> <p>In real mathematical practice mathematicians do not even use ZFC*, but rather most mathematics is done in a fairly informal way, with the main criterion of correctness being that any sufficiently trained reader is capable of convincing himself/herself that the argument <strong>can</strong> be translated to a formal proof in ZFC*. (Actually some mathematicians do not even know the formal specifications of ZFC, but they know what they are allowed to do that set theorists tell them is translatable to ZFC.)</p>
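The four group axioms quoted in the last answer are decidable by brute force on any finite carrier set. The following Python sketch is my own illustration (the helper name `is_group` and the examples are not from the thread); it checks closure, associativity, identity, and inverses directly:

```python
from itertools import product

def is_group(G, op, e):
    """Brute-force check of the four group axioms on a finite set G."""
    closure = all(op(x, y) in G for x, y in product(G, repeat=2))
    assoc = all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(G, repeat=3))
    identity = e in G and all(op(x, e) == x == op(e, x) for x in G)
    inverses = all(any(op(x, y) == e == op(y, x) for y in G) for x in G)
    return closure and assoc and identity and inverses

print(is_group(set(range(5)), lambda x, y: (x + y) % 5, 0))     # True: (Z/5, +)
print(is_group(set(range(1, 5)), lambda x, y: (x * y) % 5, 1))  # True: (Z/5)* under multiplication
print(is_group(set(range(4)), lambda x, y: (x * y) % 4, 1))     # False: 0 and 2 have no inverse
```

Of course, as the answer explains, such axiom-by-axiom checking says nothing about theorems like Lagrange's, which quantify over elements and subgroups and therefore live outside the bare axiomatization.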
Rolling Dice Probability Question
2,098,147
4
https://math.stackexchange.com/questions/2098147/rolling-dice-probability-question
<p>Rolling a fair die $7$ times, what is the probability that the total sum is divisible by $3$, and no odd value was rolled?</p> <p>I know that there are $3^6$ possible ways for the first $6$ dice to give only even values, but how many of them give a sum divisible by $3$? </p>
[ { "answer_id": 2098150, "score": 2, "text": "<p>If we forget about this \"no-odd\" thing for a second, and just focus on \"divisible by $3$\", imagine having thrown the die six times already, and just waiting for the seventh roll to decide everything. How many different results on the seventh die will make the total result divisible by $3$? Does it matter what the sum of the first $6$ are?</p>\n\n<p>Now adapt this to the case where you're only throwing evens. Does that really change anything of the above argument?</p>\n" }, { "answer_id": 2098154, "score": 1, "text": "<p>As you have pointed out, the probability of the first six dice being all even is $\\frac{3^6}{6^6}=\\frac{1}{2^6}$.</p>\n\n<p>The sum of the first six dice will be some number.\n<em>No matter what this number is,</em> the probability that the seventh roll is both even and makes the total sum divisible by $3$ is $1/6$.</p>\n\n<p>To see this explicitly, you can do some casework. I will do one case for you.\nSuppose the sum of the first six dice is one more than a multiple of $3$. Then the seventh roll must be a $2$ to make the total sum a multiple of $3$. The seventh roll could be $5$ as well, but you only want even rolls. </p>\n\n<p>It turns out that no matter what the sum of the first six rolls is, there is exactly one \"good\" seventh roll that will give you what you want.</p>\n" } ]
Question: Rolling Dice Probability Question Answer 1: <p>If we forget about this "no-odd" thing for a second, and just focus on "divisible by $3$", imagine having thrown the die six times already, and just waiting for the seventh roll to decide everything. How many different results on the seventh die will make the total result divisible by $3$? Does it matter what the sum of the first $6$ are?</p> <p>Now adapt this to the case where you're only throwing evens. Does that really change anything of the above argument?</p> Answer 2: <p>As you have pointed out, the probability of the first six dice being all even is $\frac{3^6}{6^6}=\frac{1}{2^6}$.</p> <p>The sum of the first six dice will be some number. <em>No matter what this number is,</em> the probability that the seventh roll is both even and makes the total sum divisible by $3$ is $1/6$.</p> <p>To see this explicitly, you can do some casework. I will do one case for you. Suppose the sum of the first six dice is one more than a multiple of $3$. Then the seventh roll must be a $2$ to make the total sum a multiple of $3$. The seventh roll could be $5$ as well, but you only want even rolls. </p> <p>It turns out that no matter what the sum of the first six rolls is, there is exactly one "good" seventh roll that will give you what you want.</p>
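The reasoning in the answers — $P(\text{first six even}) = 1/2^6$, then exactly one of the six faces works for the last die — gives $\frac{1}{64}\cdot\frac{1}{6}=\frac{1}{384}$. A brute-force enumeration of all $6^7$ outcomes confirms this; the Python sketch below is my own illustration, not from the thread:

```python
from fractions import Fraction
from itertools import product

# count outcomes where every roll is even and the total is divisible by 3
hits = sum(
    1
    for rolls in product(range(1, 7), repeat=7)
    if all(r % 2 == 0 for r in rolls) and sum(rolls) % 3 == 0
)
p = Fraction(hits, 6 ** 7)
print(p)  # 1/384
```

Note that `hits` comes out to $3^6 = 729$: one valid seventh face for each of the $3^6$ all-even prefixes, exactly as the second answer argues.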
If $a^a$ divides $b^b$, then $a$ divides $b$?
2,098,224
4
https://math.stackexchange.com/questions/2098224/if-aa-divides-bb-then-a-divides-b
<p>Consider $a$ and $b$, both positive integers. Is it true that $a^a$ divides $b^b$ implies $a$ divides $b$? </p> <p>I can't seem to figure out this proof. My intuition is to use the fundamental theorem of arithmetic to break each number into its prime components, however I haven't been able to come up with a solution. Any help would be appreciated.</p>
[ { "answer_id": 2098227, "score": 11, "text": "<p>This is not true in general. The fact that $4^4$ divides $10^{10}$ would be the smallest counterexample, I think.</p>\n\n<p>So <em>why</em> does the proposition fail to hold? For any integer $n$, $\\operatorname{rad}(n)$ (for \"<em>radical</em>\") is the number with the same primes in its prime factorisation as $n$, but all powers are $1$. For instance, $\\operatorname{rad}(12) = 6$ and $\\operatorname{rad}(98) = 14$. Then we <em>do</em> have \"$a^a$ divides $b^b$ implies that $\\operatorname{rad}(a)$ divides $\\operatorname{rad}(b)$\", i.e. every prime that appears in the prime factorisation of $a$ appears in the prime factorisation of $b$. However, we have no way of controlling, for any given of those primes, that the <em>power</em> of that prime is smaller in $a$ than in $b$. We just know that for any given of those primes, $a$ times its exponent in $a$ is less than $b$ times its exponent in $b$. Thus, if $b$ is large enough, and just contains the right primes, we get $a^a$ divides $b^b$, no matter how many times those primes divide $b$.</p>\n" }, { "answer_id": 2098259, "score": 5, "text": "<p>Exceptions will be of the following form:</p>\n\n<p>Suppose the prime factorisation of $a$ is $a=p_1^{e_{a_1}}p_2^{e_{a_2}}\\cdots p_n^{e_{a_n}}$ with all the $p_i$ distinct primes and with all the $e_{a_i}\\ge 1$ and at least one $e_{a_i}\\ge 2$. 
Then </p>\n\n<ul>\n<li>$b$ is divisible by $p_1p_2\\cdots p_n$ and can be written as $b=c p_1^{e_{b_1}}p_2^{e_{b_2}}\\cdots p_n^{e_{b_n}}$ with $c$ coprime to $a$ and all the $e_{b_i}\\ge 1$ </li>\n<li>for at least one $i$ there is $e_{b_i}\\lt e_{a_i}$ so $a$ does not divide $b$</li>\n<li>for all $i$ there is $ae_{a_i}\\le be_{b_i}$ so $a^a$ divides $b^b$</li>\n</ul>\n\n<p>So reversing this to generate all possible exceptions</p>\n\n<ul>\n<li>choose a positive number $n$ of distinct primes $p_1,p_2,\\ldots,p_n$ </li>\n<li>choose the same number $n$ of positive integers $e_{a_1},e_{a_2},\\ldots,e_{a_n}$ not all $1$</li>\n<li>choose the same number $n$ of positive integers $e_{b_1},e_{b_2},\\ldots,e_{b_n}$ where for at least one $i$ you have $1 \\le e_{b_i}\\lt e_{a_i}$</li>\n<li>find a number $c$ coprime to $p_1p_2\\cdots p_n$ and at least $p_1^{e_{a_1}-e_{b_1}}p_2^{e_{a_2}-e_{b_2}}\\cdots p_n^{e_{a_n}-e_{b_n}} \\displaystyle \\max_i\\left(\\frac{e_{a_i}}{e_{b_i}}\\right)$</li>\n<li>take $a=p_1^{e_{a_1}}p_2^{e_{a_2}}\\cdots p_n^{e_{a_n}}$\nand $b=c p_1^{e_{b_1}}p_2^{e_{b_2}}\\cdots p_n^{e_{b_n}}$</li>\n</ul>\n" }, { "answer_id": 2098261, "score": 1, "text": "<p>If $ a^a\\mid b^b $, then for every prime $ p$ such that $ p\\mid a$ we must have $ p\\mid b $ because $ p\\mid b^b $ implies $ p\\mid b $. Thus we can write the prime factorizations of $ a$ and $ b$ as $a=p_1^{\\alpha_1}\\cdots p_r^{\\alpha_r} $ and $ b=p_1^{\\beta_1}\\cdots p_r^{\\beta_r}\\cdots $. Therefore $$ a^a=p_1^{a\\alpha_1}\\cdots p_r^{a\\alpha_r} ,$$ $$ b^b=p_1^{b\\beta_1}\\cdots p_r\n^{b\\beta_r}\\cdots $$</p>\n\n<p>So if $ a^a\\mid b^b $ we must have $ a\\alpha_i\\le b\\beta_i $, for every $1\\le i\\le r$, but this doesn't mean that $\\alpha_i\\le \\beta_i $ for every $ i $. 
It could happen that for some $ i$, $ \\alpha_i&gt; \\beta_i $, which would imply that $ a \\not\\mid b $.</p>\n\n<p>As an example we have the counterexample given in the first answer: $4^4\\mid 10^{10} $, but $4\\not\\mid 10$. In this case $ r=1, p_1=2, \\alpha_1=2$ and $\\beta_1=1$. So we have $4\\cdot 2\\le 10\\cdot 1$, but $2&gt; 1$.</p>\n" } ]
Question: If $a^a$ divides $b^b$, then $a$ divides $b$? Answer 1: <p>This is not true in general. The fact that $4^4$ divides $10^{10}$ would be the smallest counterexample, I think.</p> <p>So <em>why</em> does the proposition fail to hold? For any integer $n$, $\operatorname{rad}(n)$ (for "<em>radical</em>") is the number with the same primes in its prime factorisation as $n$, but all powers are $1$. For instance, $\operatorname{rad}(12) = 6$ and $\operatorname{rad}(98) = 14$. Then we <em>do</em> have "$a^a$ divides $b^b$ implies that $\operatorname{rad}(a)$ divides $\operatorname{rad}(b)$", i.e. every prime that appears in the prime factorisation of $a$ appears in the prime factorisation of $b$. However, we have no way of controlling, for any given of those primes, that the <em>power</em> of that prime is smaller in $a$ than in $b$. We just know that for any given of those primes, $a$ times its exponent in $a$ is less than $b$ times its exponent in $b$. Thus, if $b$ is large enough, and just contains the right primes, we get $a^a$ divides $b^b$, no matter how many times those primes divide $b$.</p> Answer 2: <p>Exceptions will be of the following form:</p> <p>Suppose the prime factorisation of $a$ is $a=p_1^{e_{a_1}}p_2^{e_{a_2}}\cdots p_n^{e_{a_n}}$ with all the $p_i$ distinct primes and with all the $e_{a_i}\ge 1$ and at least one $e_{a_i}\ge 2$. 
Then </p> <ul> <li>$b$ is divisible by $p_1p_2\cdots p_n$ and can be written as $b=c p_1^{e_{b_1}}p_2^{e_{b_2}}\cdots p_n^{e_{b_n}}$ with $c$ coprime to $a$ and all the $e_{b_i}\ge 1$ </li> <li>for at least one $i$ there is $e_{b_i}\lt e_{a_i}$ so $a$ does not divide $b$</li> <li>for all $i$ there is $ae_{a_i}\le be_{b_i}$ so $a^a$ divides $b^b$</li> </ul> <p>So reversing this to generate all possible exceptions</p> <ul> <li>choose a positive number $n$ of distinct primes $p_1,p_2,\ldots,p_n$ </li> <li>choose the same number $n$ of positive integers $e_{a_1},e_{a_2},\ldots,e_{a_n}$ not all $1$</li> <li>choose the same number $n$ of positive integers $e_{b_1},e_{b_2},\ldots,e_{b_n}$ where for at least one $i$ you have $1 \le e_{b_i}\lt e_{a_i}$</li> <li>find a number $c$ coprime to $p_1p_2\cdots p_n$ and at least $p_1^{e_{a_1}-e_{b_1}}p_2^{e_{a_2}-e_{b_2}}\cdots p_n^{e_{a_n}-e_{b_n}} \displaystyle \max_i\left(\frac{e_{a_i}}{e_{b_i}}\right)$</li> <li>take $a=p_1^{e_{a_1}}p_2^{e_{a_2}}\cdots p_n^{e_{a_n}}$ and $b=c p_1^{e_{b_1}}p_2^{e_{b_2}}\cdots p_n^{e_{b_n}}$</li> </ul> Answer 3: <p>If $ a^a\mid b^b $, then for every prime $ p$ such that $ p\mid a$ we must have $ p\mid b $ because $ p\mid b^b $ implies $ p\mid b $. Thus we can write the prime factorizations of $ a$ and $ b$ as $a=p_1^{\alpha_1}\cdots p_r^{\alpha_r} $ and $ b=p_1^{\beta_1}\cdots p_r^{\beta_r}\cdots $. Therefore $$ a^a=p_1^{a\alpha_1}\cdots p_r^{a\alpha_r} ,$$ $$ b^b=p_1^{b\beta_1}\cdots p_r^{b\beta_r}\cdots $$</p> <p>So if $ a^a\mid b^b $ we must have $ a\alpha_i\le b\beta_i $, for every $1\le i\le r$, but this doesn't mean that $\alpha_i\le \beta_i $ for every $ i $. It could happen that for some $ i$, $ \alpha_i&gt; \beta_i $, which would imply that $ a \not\mid b $.</p> <p>As an example we have the counterexample given in the first answer: $4^4\mid 10^{10} $, but $4\not\mid 10$. In this case $ r=1, p_1=2, \alpha_1=2$ and $\beta_1=1$. So we have $4\cdot 2\le 10\cdot 1$, but $2&gt; 1$.</p>
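The counterexample $(a,b)=(4,10)$ from the first answer, and the claim that it is the smallest, can be checked by a direct search. This Python sketch is my own illustration (the function name is mine); it scans pairs in lexicographic order, relying on Python's arbitrary-precision integers:

```python
def smallest_counterexample(bound):
    """First (a, b) in lexicographic order with a^a | b^b but a not dividing b."""
    for a in range(1, bound):
        for b in range(1, bound):
            if b ** b % a ** a == 0 and b % a != 0:
                return a, b
    return None

print(smallest_counterexample(20))  # (4, 10)
```

Raising `bound` turns up further exceptions such as $(4,14)$ and $(9,21)$, matching the construction described in the second answer.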
How can I prove this inequality in a triangle: $\frac{1}{(b+c-a)} +\frac{1}{(c+a-b)} +\frac{1}{(a+b-c)} \gt \frac{1}{a} +\frac{1}{b} +\frac{1}{c}$?
2,098,409
4
https://math.stackexchange.com/questions/2098409/how-can-i-prove-this-inequality-in-a-triangle-frac1bc-a-frac1ca
<hr> <p><strong><em>If $a,b,c$ are the sides of a triangle then show that- $\frac{1}{(b+c-a)} +\frac{1}{(c+a-b)} +\frac{1}{(a+b-c)} \gt \frac{1}{a} +\frac{1}{b} +\frac{1}{c}$</em></strong></p> <p>I got this problem from an old book on algebra. I've been trying for a long time but can't reach the answer.</p> <p>My try on the question-</p> <p>We know that in a triangle, the sum of two sides is always greater than the third side</p> <p>$b+c\gt a$ $\implies$ $b+c-a \gt 0$</p> <p>$c+a\gt b$ $\implies$ $c+a-b\gt 0$</p> <p>$a+b \gt c$ $\implies$ $a+b-c\gt 0$</p> <p>Also,</p> <p>$\frac {1}{(b+c)} \lt \frac{1}{a}$</p> <p>$\frac{1}{(c+a)} \lt \frac{1}{b}$</p> <p>$\frac{1}{(a+b)} \lt \frac{1}{c}$</p> <p>Adding,</p> <p>$\frac{1}{(b+c)} + \frac{1}{(c+a)} +\frac{1}{(a+b)} \lt \frac{1}{a} + \frac{1}{b} + \frac{1}{c}$</p> <p>How do I proceed further? All genuine answers are welcome :)</p>
[ { "answer_id": 2098411, "score": 6, "text": "<p><strong>hint</strong>: use $\\dfrac{1}{x}+\\dfrac{1}{y} \\ge \\dfrac{4}{x+y}$ $ $ $3$ times. The first time for the first two terms with $x = a+b-c, y = b+c-a$.</p>\n" }, { "answer_id": 2101928, "score": 1, "text": "<p>Your inequality is wrong! Try <span class=\"math-container\">$a=b=c$</span>.</p>\n\n<p>It should be \n<span class=\"math-container\">$$\\frac{1}{b+c-a} +\\frac{1}{c+a-b} +\\frac{1}{a+b-c} \\geq \\frac{1}{a} +\\frac{1}{b} +\\frac{1}{c}$$</span>\nFor the proof we can use the following way.</p>\n\n<p>Let <span class=\"math-container\">$a+b+c=3u$</span>, <span class=\"math-container\">$ab+ac+bc=3v^2$</span> and <span class=\"math-container\">$abc=w^3$</span>.</p>\n\n<p>Since our inequality is fifth degree, we see that <span class=\"math-container\">$f(w^3)\\geq0,$</span> where <span class=\"math-container\">$f$</span> is a linear function,</p>\n\n<p>which says that it's enough to prove our inequality for an extreme value of <span class=\"math-container\">$w^3$</span>,</p>\n\n<p>which happens in the following cases.</p>\n\n<ol>\n<li><span class=\"math-container\">$b=c=1$</span>.</li>\n</ol>\n\n<p>We obtain:\n<span class=\"math-container\">$$(a-1)^2\\geq0;$$</span></p>\n\n<ol start=\"2\">\n<li><span class=\"math-container\">$c\\rightarrow0^+$</span> and <span class=\"math-container\">$b=1$</span>.</li>\n</ol>\n\n<p>We obtain:\n<span class=\"math-container\">$$(a+1)(a-1)^2\\geq0;$$</span></p>\n\n<ol start=\"3\">\n<li><span class=\"math-container\">$a+b-c\\rightarrow0^+$</span></li>\n</ol>\n\n<p>In this case the inequality is obvious.</p>\n\n<p>Done!</p>\n" } ]
Question: How can I prove this inequality in a triangle: $\frac{1}{(b+c-a)} +\frac{1}{(c+a-b)} +\frac{1}{(a+b-c)} \gt \frac{1}{a} +\frac{1}{b} +\frac{1}{c}$? Answer 1: <p><strong>hint</strong>: use $\dfrac{1}{x}+\dfrac{1}{y} \ge \dfrac{4}{x+y}$ $ $ $3$ times. The first time for the first two terms with $x = a+b-c, y = b+c-a$.</p> Answer 2: <p>Your inequality is wrong! Try <span class="math-container">$a=b=c$</span>.</p> <p>It should be <span class="math-container">$$\frac{1}{b+c-a} +\frac{1}{c+a-b} +\frac{1}{a+b-c} \geq \frac{1}{a} +\frac{1}{b} +\frac{1}{c}$$</span> For the proof we can use the following way.</p> <p>Let <span class="math-container">$a+b+c=3u$</span>, <span class="math-container">$ab+ac+bc=3v^2$</span> and <span class="math-container">$abc=w^3$</span>.</p> <p>Since our inequality is fifth degree, we see that <span class="math-container">$f(w^3)\geq0,$</span> where <span class="math-container">$f$</span> is a linear function,</p> <p>which says that it's enough to prove our inequality for an extreme value of <span class="math-container">$w^3$</span>,</p> <p>which happens in the following cases.</p> <ol> <li><span class="math-container">$b=c=1$</span>.</li> </ol> <p>We obtain: <span class="math-container">$$(a-1)^2\geq0;$$</span></p> <ol start="2"> <li><span class="math-container">$c\rightarrow0^+$</span> and <span class="math-container">$b=1$</span>.</li> </ol> <p>We obtain: <span class="math-container">$$(a+1)(a-1)^2\geq0;$$</span></p> <ol start="3"> <li><span class="math-container">$a+b-c\rightarrow0^+$</span></li> </ol> <p>In this case the inequality is obvious.</p> <p>Done!</p>
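Both the non-strict form of the inequality and the equality case $a=b=c$ noted in the second answer can be verified exactly over all small integer-sided triangles. The Python sketch below is my own illustration using exact rational arithmetic:

```python
from fractions import Fraction

def gap(a, b, c):
    """LHS - RHS of 1/(b+c-a) + 1/(c+a-b) + 1/(a+b-c) >= 1/a + 1/b + 1/c."""
    assert a + b > c and b + c > a and c + a > b, "not a triangle"
    lhs = Fraction(1, b + c - a) + Fraction(1, c + a - b) + Fraction(1, a + b - c)
    rhs = Fraction(1, a) + Fraction(1, b) + Fraction(1, c)
    return lhs - rhs

# scan all integer triangles with a <= b <= c and sides below 12
for a in range(1, 12):
    for b in range(a, 12):
        for c in range(b, a + b):  # c < a + b enforces the triangle inequality
            d = gap(a, b, c)
            assert d >= 0
            assert (d == 0) == (a == b == c)  # equality exactly for equilateral
print(gap(3, 4, 5))  # 2/15
```

This matches the hint in the first answer: applying $\frac1x+\frac1y\ge\frac4{x+y}$ to the three pairs of denominators gives the inequality, with equality only when all three denominators coincide, i.e. $a=b=c$.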
Prove that $f$ defined by $f(x)=1$ if $x \in \mathbb{Q}$, $f(x)=0$ if $x \in \mathbb{R}\setminus \mathbb{Q}$, is not integrable
2,098,636
4
https://math.stackexchange.com/questions/2098636/prove-that-f-defined-by-fx-1-if-x-in-mathbbq-fx-0-if-x-in-ma
<p>Prove that </p> <p>$$f\left ( x \right )= \begin{cases} 1 &amp; x \in \mathbb{Q}, \\ 0 &amp; x\in \mathbb{R}\setminus \mathbb{Q}. \end{cases}$$</p> <p>is not integrable.</p> <p>I came across this. I think that the function itself is not continuous? But I do not think that is a legitimate answer to this question.</p>
[ { "answer_id": 2098750, "score": 9, "text": "<p>An important remark: since the function <span class=\"math-container\">$f(x) = \\begin{cases} 1 &amp; x \\in \\mathbb{Q} \\\\ 0 &amp; x \\in \\mathbb{R} \\setminus \\mathbb{Q} \\end{cases}$</span> is clearly Lebesgue-integrable (being equal to <span class=\"math-container\">$0$</span> almost everywhere), you are probably asking about Riemann integrability. In this case, though, you have a problem: the concept of Riemann integrability is defined only for functions defined on <em>bounded</em> intervals, and <span class=\"math-container\">$f$</span> is defined on <span class=\"math-container\">$\\Bbb R$</span>. I believe that what you meant to ask about is the following slightly modified function: <span class=\"math-container\">$g(x) = \\begin{cases} 1 &amp; x \\in [a,b] \\cap \\mathbb{Q} \\\\ 0 &amp; x \\in [a,b] \\setminus \\mathbb{Q} \\end{cases}$</span>. In this case, an answer can be given using <a href=\"https://en.wikipedia.org/wiki/Riemann_integral#Integrability\" rel=\"noreferrer\">Lebesgue's criterion of Riemann integrability</a>: a bounded <span class=\"math-container\">$f$</span> is Riemann-integrable on <span class=\"math-container\">$[a,b]$</span> if and only if the set of discontinuities of <span class=\"math-container\">$f$</span> has Lebesgue measure <span class=\"math-container\">$0$</span>.</p>\n\n<p>Let <span class=\"math-container\">$x \\in [a,b] \\cap \\Bbb Q$</span>. Then <span class=\"math-container\">$f(x)=1$</span>. Pick a sequence of irrational numbers <span class=\"math-container\">$(x_n)_{n\\in\\Bbb N}$</span> with <span class=\"math-container\">$x_n \\to x$</span>. Then <span class=\"math-container\">$f(x_n)=0 \\not\\to 1=f(x)$</span>, so <span class=\"math-container\">$f$</span> is not continuous in <span class=\"math-container\">$x$</span>.</p>\n\n<p>Let <span class=\"math-container\">$x \\in [a,b] \\setminus \\Bbb Q$</span>. Then <span class=\"math-container\">$f(x)=0$</span>. 
Pick a sequence of rational numbers <span class=\"math-container\">$(x_n)_{n\in\Bbb N}$</span> with <span class=\"math-container\">$x_n \to x$</span>. Then <span class=\"math-container\">$f(x_n)=1 \not\to 0=f(x)$</span>, so <span class=\"math-container\">$f$</span> is not continuous in <span class=\"math-container\">$x$</span>.</p>\n\n<p>The above argument shows that <span class=\"math-container\">$f$</span> is discontinuous in all the points of <span class=\"math-container\">$[a,b]$</span>, which is a set of Lebesgue measure <span class=\"math-container\">$b-a \ne 0$</span>, therefore Lebesgue's criterion tells us that <span class=\"math-container\">$f$</span> is not Riemann-integrable on <span class=\"math-container\">$[a,b]$</span>.</p>\n" }, { "answer_id": 2098661, "score": 7, "text": "<p>Let <span class=\"math-container\">$S(f,P)$</span> and <span class=\"math-container\">$s(f,P)$</span> be the upper and lower sums of <span class=\"math-container\">$f$</span> with respect to partition <span class=\"math-container\">$P$</span> of interval <span class=\"math-container\">$[0,1]$</span>. Let <span class=\"math-container\">$M_i=\sup\{f(x)| x \in I_i\}$</span> and <span class=\"math-container\">$m_i=\inf\{f(x)| x \in I_i\}$</span>, where <span class=\"math-container\">$I_i$</span> is the <span class=\"math-container\">$i$</span>th interval of the partition <span class=\"math-container\">$P$</span>. Note that <span class=\"math-container\">$M_i=1$</span> for all <span class=\"math-container\">$i$</span>, because every interval <span class=\"math-container\">$I_i$</span> of the partition <span class=\"math-container\">$P$</span> contains rational numbers. On the other hand, <span class=\"math-container\">$m_i=0$</span> for all <span class=\"math-container\">$i$</span> because every interval <span class=\"math-container\">$I_i$</span> of the partition <span class=\"math-container\">$P$</span> contains irrational numbers. 
By definition,\n<span class=\"math-container\">$$S(f,P)=\sum_{i=1}^nM_i\mu(I_i)=\sum_{i=1}^n1\mu(I_i)=\sum_{i=1}^n\mu(I_i)=1-0=1$$</span>\n<span class=\"math-container\">$$s(f,P)=\sum_{i=1}^nm_i\mu(I_i)=\sum_{i=1}^n0\mu(I_i)=0$$</span>\nThus, <span class=\"math-container\">$f$</span> is not Riemann integrable because the upper and lower integrals are not equal. (The upper integral is the infimum of the upper sums over all partitions and the lower integral is the supremum of the lower sums).</p>\n" }, { "answer_id": 2098647, "score": 5, "text": "<p>The peculiarity of this function is that it has an infinite number of discontinuities in any finite interval over which it is Riemann-integrated.</p>\n\n<p>Here, the upper and lower Riemann sums do not coincide. Hence it is not Riemann integrable. But it is <a href=\"https://math.stackexchange.com/questions/755576/the-lebesgue-integral-and-the-dirichlet-function\">Lebesgue integrable</a>.</p>\n\n<p>For further reading, you may check <a href=\"https://math.stackexchange.com/questions/437711/is-dirichlet-function-riemann-integrable\">this</a>.</p>\n" } ]
Question: Prove that $f$ defined by $f(x)=1$ if $x \in \mathbb{Q}$, $f(x)=0$ if $x \in \mathbb{R}\setminus \mathbb{Q}$, is not integrable Answer 1: <p>An important remark: since the function <span class="math-container">$f(x) = \begin{cases} 1 &amp; x \in \mathbb{Q} \\ 0 &amp; x \in \mathbb{R} \setminus \mathbb{Q} \end{cases}$</span> is clearly Lebesgue-integrable (being equal to <span class="math-container">$0$</span> almost everywhere), you are probably asking about Riemann integrability. In this case, though, you have a problem: the concept of Riemann integrability is defined only for functions defined on <em>bounded</em> intervals, and <span class="math-container">$f$</span> is defined on <span class="math-container">$\Bbb R$</span>. I believe that what you meant to ask about is the following slightly modified function: <span class="math-container">$g(x) = \begin{cases} 1 &amp; x \in [a,b] \cap \mathbb{Q} \\ 0 &amp; x \in [a,b] \setminus \mathbb{Q} \end{cases}$</span>. In this case, an answer can be given using <a href="https://en.wikipedia.org/wiki/Riemann_integral#Integrability" rel="noreferrer">Lebesgue's criterion of Riemann integrability</a>: a bounded <span class="math-container">$f$</span> is Riemann-integrable on <span class="math-container">$[a,b]$</span> if and only if the set of discontinuities of <span class="math-container">$f$</span> has Lebesgue measure <span class="math-container">$0$</span>.</p> <p>Let <span class="math-container">$x \in [a,b] \cap \Bbb Q$</span>. Then <span class="math-container">$f(x)=1$</span>. Pick a sequence of irrational numbers <span class="math-container">$(x_n)_{n\in\Bbb N}$</span> with <span class="math-container">$x_n \to x$</span>. Then <span class="math-container">$f(x_n)=0 \not\to 1=f(x)$</span>, so <span class="math-container">$f$</span> is not continuous in <span class="math-container">$x$</span>.</p> <p>Let <span class="math-container">$x \in [a,b] \setminus \Bbb Q$</span>. 
Then <span class="math-container">$f(x)=0$</span>. Pick a sequence of rational numbers <span class="math-container">$(x_n)_{n\in\Bbb N}$</span> with <span class="math-container">$x_n \to x$</span>. Then <span class="math-container">$f(x_n)=1 \not\to 0=f(x)$</span>, so <span class="math-container">$f$</span> is not continuous in <span class="math-container">$x$</span>.</p> <p>The above argument shows that <span class="math-container">$f$</span> is discontinuous in all the points of <span class="math-container">$[a,b]$</span>, which is a set of Lebesgue measure <span class="math-container">$b-a \ne 0$</span>, therefore Lebesgue's criterion tells us that <span class="math-container">$f$</span> is not Riemann-integrable on <span class="math-container">$[a,b]$</span>.</p> Answer 2: <p>Let <span class="math-container">$S(f,P)$</span> and <span class="math-container">$s(f,P)$</span> be the upper and lower sums of <span class="math-container">$f$</span> with respect to partition <span class="math-container">$P$</span> of interval <span class="math-container">$[0,1]$</span>. Let <span class="math-container">$M_i=\sup\{f(x)| x \in I_i\}$</span> and <span class="math-container">$m_i=\inf\{f(x)| x \in I_i\}$</span>, where <span class="math-container">$I_i$</span> is the <span class="math-container">$i$</span>th interval of the partition <span class="math-container">$P$</span>. Note that <span class="math-container">$M_i=1$</span> for all <span class="math-container">$i$</span>, because every interval <span class="math-container">$I_i$</span> of the partition <span class="math-container">$P$</span> contains rational numbers. On the other hand, <span class="math-container">$m_i=0$</span> for all <span class="math-container">$i$</span> because every interval <span class="math-container">$I_i$</span> of the partition <span class="math-container">$P$</span> contains irrational numbers. 
By definition, <span class="math-container">$$S(f,P)=\sum_{i=1}^nM_i\mu(I_i)=\sum_{i=1}^n1\mu(I_i)=\sum_{i=1}^n\mu(I_i)=1-0=1$$</span> <span class="math-container">$$s(f,P)=\sum_{i=1}^nm_i\mu(I_i)=\sum_{i=1}^n0\mu(I_i)=0$$</span> Thus, <span class="math-container">$f$</span> is not Riemann integrable because the upper and lower integrals are not equal. (The upper integral is the infimum of the upper sums over all partitions and the lower integral is the supremum of the lower sums).</p> Answer 3: <p>The peculiarity of this function is that it has an infinite number of discontinuities in any finite interval over which it is Riemann-integrated.</p> <p>Here, the upper and lower Riemann sums do not coincide. Hence it is not Riemann integrable. But it is <a href="https://math.stackexchange.com/questions/755576/the-lebesgue-integral-and-the-dirichlet-function">Lebesgue integrable</a>.</p> <p>For further reading, you may check <a href="https://math.stackexchange.com/questions/437711/is-dirichlet-function-riemann-integrable">this</a>.</p>
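The partition argument in Answer 2 can be checked mechanically. A minimal sketch (the helper name `darboux_sums` is mine, not from the answer): for any uniform partition of $[0,1]$, every subinterval of positive length contains both a rational and an irrational, so $M_i=1$ and $m_i=0$ on each piece, and the upper and lower Darboux sums are $1$ and $0$ for every $n$.

```python
from fractions import Fraction

def darboux_sums(n):
    """Upper and lower Darboux sums of the Dirichlet function on [0, 1]
    for the uniform partition into n subintervals.  Each subinterval of
    positive length contains both rationals and irrationals, so the
    supremum of f on it is 1 and the infimum is 0."""
    width = Fraction(1, n)                     # mu(I_i) for every piece
    upper = sum(1 * width for _ in range(n))   # S(f, P) = sum of M_i * mu(I_i)
    lower = sum(0 * width for _ in range(n))   # s(f, P) = sum of m_i * mu(I_i)
    return upper, lower

for n in (2, 10, 1000):
    S, s = darboux_sums(n)
    print(n, S, s)  # S = 1 and s = 0 for every n: the gap never shrinks
```

Refining the partition changes nothing, so the upper and lower integrals stay at $1$ and $0$ and can never meet.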
A property of the midpoint of the hypotenuse in a right triangle
2,098,721
4
https://math.stackexchange.com/questions/2098721/a-property-of-the-midpoint-of-the-hypotenuse-in-a-right-triangle
<blockquote> <p>Prove that in a right angled triangle the mid point of the hypotenuse is equidistant from its vertices.</p> </blockquote> <p>I have asked similar question but with no satisfactory result.</p> <p>So I solved it by myself. But I am not sure whether its correct or not. So please help me.</p> <p><strong>Solution:</strong></p> <p>Let in triangle $\Delta OAB$ right angled at $A$. Where $O$ is origin and $\vec{a}$ is vector along $OA$ and $\vec{b}$ is vector along $OB$.</p> <p>Let $C$ be the mid point of hypotenuse $OB$.</p> <p>We have to prove $OC = BC = AC$.</p> <p>As $C$ is mid point so $OC = BC = \frac{\vec{b}}{2}$</p> <p>$AC = \frac{\vec{b}}{2} - \vec{a}$</p> <p>I am not getting the value of AC as $\frac{\vec{b}}{2}$</p> <p><strong>Sorry because I am using my mobile unable to draw and include image.</strong></p> <p>Anything wrong please tell me with explanation. Thanks.</p>
[ { "answer_id": 2098934, "score": 5, "text": "<p>Any right triangle can be inscribed into a circle with center $C$ where the hypotenuse $OB$ is the diameter. Now it is straightforward to deduce that $CA$ is a radius, so $CO=CB=CA=r$. </p>\n\n<p>This is a result of elementary geometry, in fact it is possible to demonstrate that in any circle the angle at the centre is twice the corresponding angle at the circumference: $\angle OCB=2\angle OAB$: Euclid's <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">inscribed angle theorem</a>.</p>\n\n<p><a href=\"https://i.stack.imgur.com/NCjaw.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/NCjaw.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Now, if the angle $\angle OCB=180$, then $\angle OAB=90$, $OB$ is a diameter and $\triangle OAB$ is a right triangle.</p>\n\n<p><a href=\"https://i.stack.imgur.com/CGPzj.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/CGPzj.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2098852, "score": 1, "text": "<p>In $\triangle ABC$ let $A'$ be the mid-point of $BC.$ Let $B'$ lie on $AC$ with $A'B'\| AB.$ Then $\angle CAB =\angle CB'A'$ because $BA\| A'B'.$ And $\angle CBA=\angle CA'B'$ for the same reason. So triangles $CAB$ and $CB'A'$ are similar. So $$CB'/CA=CA'/CB=1/2.$$ So $B'$ is the mid-point of $AC.$</p>\n\n<p>Suppose $\angle CAB$ is a right angle. Then so is $\angle CB'A'.$ Since also $AB'=CB',$ this means that $A'$ lies on the right bisector of $CA$. Therefore A' is equidistant from A and C. That is, $$A'A=A'C.$$ Interchanging the letters $B,C$ throughout all of this we also get $$A'A=A'B.$$</p>\n\n<p>In response to comments by the OP I am adding a second proof.</p>\n\n<p>Let $O$ be the origin. Let $\|v\|$ denote the length of a vector $v.$</p>\n\n<p>Let $B,C$ be non-zero vectors with $\angle BOC$ being a right angle. 
Let $l_1$ be the line through $O,C.$ Let $l_2$ be the line through $O,B.$ Let $l_3$ be the line through $B$ parallel to $l_1.$ We have $l_1\bot l_2$ and $l_1\| l_3$ so $l_2\bot l_3.$</p>\n\n<p>The vectors $B+C$ and $B-C$ lie on $l_3.$ The distances from $B+C$ to $B$, and from $ B-C$ to $B$ are, respectively, $\|(B+C)-B\|=\|C\|$ and $\| (B-C)-B\|=\|-C\|=\|C\|,$ which are equal distances. </p>\n\n<p>And $l_2$ contains $B$ with $l_2\bot l_3.$ So $l_2$ is the right bisector of the segment joining $B-C$ to $B+C.$ And $O$ lies on $l_2.$ Therefore the distances from $B+C$ to $O$ and from $B-C$ to $O$ are equal. That is $$(\bullet ) \quad \|B+C\|=\|(B+C)-O\|=\|(B-C)-O\|=\|B-C\|.$$</p>\n\n<p>The mid-point $P$ of $B, C$ is $(B+C)/2.$</p>\n\n<p>The distance from $P$ to $B$ is $$\|(B+C)/2-B\|=\|(B-C)/2\|=\|B-C\|/2.$$</p>\n\n<p>The distance from $P$ to $O$ is $$ \|(B+C)/2-O\|=\|(B+C)/2\|=\|B+C\|/2.$$ From $(\bullet )$ we see that $P$ is equidistant from $O$ and from $B.$ </p>\n" }, { "answer_id": 2110621, "score": 1, "text": "<p>A geometric proof is much simpler. Generate a copy ACD of your ABC triangle so as to form a rectangle ABCD. The diagonals AC and BD intersect at their common midpoint O, which happens to be the center of the circumscribed circle. 
Therefore OA=OB=OC=Hypotenuse/2.</p>\n\n<p><a href=\"https://i.stack.imgur.com/vo5ap.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/vo5ap.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2110624, "score": 1, "text": "<p>Here is an image for the question:</p>\n\n<p><a href=\"https://i.stack.imgur.com/QcrER.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/QcrER.png\" alt=\"right angled triangle\"></a></p>\n\n<p>As $C$ is the midpoint of $\vec{AB}$, $d$ and $e$ are the midpoints of $\vec{OA}$ and $\vec{OB}$ respectively, and so by symmetry, $|\vec{AC}|=|\vec{OC}|=|\vec{BC}|$.</p>\n\n<p>To see it another way, we can say $\vec{OC}=(d,e)$ with a magnitude of $\sqrt{d^2+e^2}$. $\vec{AC}=(-d,e)$ and therefore has the same magnitude as $\vec{OC}$ (and $\vec{BC}$).</p>\n" }, { "answer_id": 2116700, "score": 1, "text": "<p>Let ABC be any triangle right-angled at C. Let D be the mid point of [AB].</p>\n\n<p>Take C as the origin, with $\vec{CA} = \vec{a}$ and $\vec{CB} = \vec{b}$</p>\n\n<p>Then $\vec{a} \cdot \vec{b} = 0$</p>\n\n<p>Also position vector of A = $\vec{a}$</p>\n\n<p>And position vector of B = $\vec{b}$</p>\n\n<p>Position vector of D = $\frac{\text{P.V of A + P.V of B}}{2} = \frac{\vec{a} + \vec{b}}{2}$</p>\n\n<p>$CD^2 = |\vec{CD}|^2 = |\text{P.V of D}|^2$</p>\n\n<p>= $\left| \frac{\vec{a} + \vec{b}}{2} \right|^2 = \left( \frac{\vec{a} + \vec{b}}{2} \right)^2$</p>\n\n<p>= $\frac 14 (\vec{a} + \vec{b}) \cdot (\vec{a} + \vec{b})$</p>\n\n<p>= $\frac 14 (a^2 + b^2 + 2 \vec{a} \cdot \vec{b})$</p>\n\n<p>= $\frac 14 (a^2 + b^2)$</p>\n\n<p>As $ \vec{a} \cdot \vec{b} = 0$</p>\n\n<p>Also $AD^2 = |\vec{AD}|^2$</p>\n\n<p>= $|\text{P.V of D - P.V of A}|^2$</p>\n\n<p>= $\left| \frac{\vec{a} + \vec{b}}{2} - \vec{a} \right|^2$</p>\n\n<p>= $\left( \frac{\vec{b} - \vec{a}}{2} \right)^2$</p>\n\n<p>= $\frac 14 (\vec{b} - \vec{a}) \cdot (\vec{b} - 
\vec{a})$</p>\n\n<p>= $\frac 14 (b^2 + a^2 - 2 \vec{b} \cdot \vec{a})$</p>\n\n<p>= $\frac 14 (a^2 + b^2)$</p>\n\n<p>As $ \vec{a} \cdot \vec{b} = 0$</p>\n\n<p>Hence $CD^2 = AD^2$</p>\n\n<p>$\to |CD| = |AD|$</p>\n\n<p>Also |BD| = |AD|</p>\n\n<p>Then |AD| = |BD| = |CD|</p>\n\n<p>Hence proved.</p>\n" }, { "answer_id": 2118572, "score": 1, "text": "<p><a href=\"https://i.stack.imgur.com/RFmrU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RFmrU.png\" alt=\"enter image description here\" /></a></p>\n<p>Suppose the sign &quot;?&quot; is one of the signs &quot;&lt;&quot;, &quot;=&quot;, &quot;&gt;&quot; in the expression:\n<span class=\"math-container\">$$AM=CM\ \ ? \ \ BM$$</span>\nIf you already know that <strong>the longer side, the larger opposite angle</strong>, you can use this to see that &quot;?&quot; is the same in the next two expressions:\n<span class=\"math-container\">$$\gamma\ ?\ \alpha$$</span>\n<span class=\"math-container\">$$\delta\ ?\ \beta$$</span>\nWe can add them:\n<span class=\"math-container\">$$\gamma+\delta\ ?\ \alpha+\beta$$</span>\nUsing <span class=\"math-container\">$\alpha+\beta+\gamma+\delta=180^o$</span>, it follows that:\n<span class=\"math-container\">$$\gamma+\delta\ ?\ 90^o$$</span>\nThus <span class=\"math-container\">$BM$</span> is <strong>greater than</strong> <span class=\"math-container\">$AM$</span> and <span class=\"math-container\">$CM$</span> iff the angle <span class=\"math-container\">$B$</span> is <strong>acute</strong>, <strong>equal</strong> iff the angle <span class=\"math-container\">$B$</span> is <strong>right</strong> and <strong>less than</strong> iff the angle is <strong>obtuse</strong>.</p>\n" }, { "answer_id": 2116718, "score": 0, "text": "<ul>\n<li>Assume WLOG that the right-angled triangle has points $O(0,0), A(a,0), B(0,b)$ with $AB$ being the hypotenuse. </li>\n<li>The midpoint of the hypotenuse $M(\frac a2, \frac b2)$. 
</li>\n<li>Since the length of $AB$ is $\\ell=\\sqrt{a^2+b^2}$, $AM=BM=\\frac 12 \\ell$. </li>\n<li>The length $OM$ is $\\sqrt{\\left(\\frac a2\\right)^2+\\left(\\frac b2\\right)^2}=\\frac 12 \\sqrt{a^2+b^2}=\\frac 12 \\ell$ </li>\n<li>Hence $OM=AM=BM$. </li>\n</ul>\n" }, { "answer_id": 2117306, "score": 0, "text": "<blockquote>\n <p>As $C$ is mid point so $OC = BC = \\frac{\\vec{b}}{2}$</p>\n</blockquote>\n\n<p>Not that it matters to the following, but the second vector is reversed: $\\;\\overrightarrow{OC} = \\overrightarrow{CB} = \\frac{\\vec b}{2}\\,$.</p>\n\n<blockquote>\n <p>$AC = \\frac{\\vec{b}}{2} - \\vec{a}$</p>\n</blockquote>\n\n<p>Let for brevity $\\,\\vec c = \\overrightarrow{OC} = \\frac{\\vec b}{2}\\,$, then $\\overrightarrow{AC} = \\vec c - \\vec a\\,$</p>\n\n<blockquote>\n <p>I am not getting the value of AC as $\\frac{\\vec{b}}{2}$</p>\n</blockquote>\n\n<p>That equality is obviously not true as vectors, but what we actually need to get is $|\\vec{AC}| = \\big|\\frac{\\vec{b}}{2}\\big|=|\\vec c|\\,$.</p>\n\n<p>In order to get the magnitude of $|\\vec{AC}|$ and prove that equality, we have to use the fact that $\\triangle OAB$ is a right triangle. 
The premise $OA \\perp AB$ translates in vector notation to the <a href=\"https://en.wikipedia.org/wiki/Dot_product#Properties\" rel=\"nofollow noreferrer\">dot product</a> being $0\\,$: $$\\overrightarrow{OA} \\cdot \\overrightarrow{AB} = 0 \\quad\\iff\\quad \\vec a \\cdot (\\vec b - \\vec a) = 0 \\quad\\iff\\quad \\vec a \\cdot \\vec b - |\\vec a|^2 = 0 \\tag{1}$$</p>\n\n<p>Since $\\vec b = 2\\,\\vec c$ relation $(1)$ can also be written as:</p>\n\n<p>$$2\\,\\vec a \\cdot \\vec c - |\\vec a|^2 = 0 \\tag{2}$$</p>\n\n<p>Using $(2)\\,$, it follows that:</p>\n\n<p>$$\n\\require{cancel}\n|\\vec{AC}|^2 = \\overrightarrow{AC} \\cdot \\overrightarrow{AC} = (\\vec c - \\vec a) \\cdot (\\vec c - \\vec a) = |\\vec c|^2 \\cancel{- 2\\,\\vec a \\cdot \\vec c} + \\cancel{|\\vec a|^2} = |\\vec c|^2 \\tag{3}\n$$</p>\n\n<p>Therefore $|\\vec{AC}|^2 = |\\vec{OC}|^2\\,$, so $|AC| = |OC|\\,$ and, since $|OC|=|BC|\\,$, $\\;|AC|=|OC|=|BC|$.</p>\n" }, { "answer_id": 3974916, "score": 0, "text": "<p>Let the triangle be ABC with angle B = 90 and AC as hypotenuse. Join B to midpoint of AC. Midpoint of AC is called D.</p>\n<p>Let angle C = x. So, angle A = 90 - x.\nLet angle DBC = y. 
So, angle DBA = 90 - y</p>\n<p>Now, according to Sine rule of triangle, in ∆DBC:</p>\n<p>BD/Sin(C) = DC/Sin(angle DBC)\n==&gt; BD/Sin(x) = DC/Sin(y)\n==&gt; BD = (DC * Sin(x))/Sin(y) -------------------- (equation 1)</p>\n<p>Now, according to Sine rule of triangle, in ∆DBA:</p>\n<p>BD/Sin(A) = AD/Sin(angle DBA)\n==&gt; BD/Sin(90 - x) = AD/Sin(90 - y)\n==&gt; BD = (AD * Cos(x))/Cos(y) -------------------- (equation 2)</p>\n<p>We know that AD = DC, So:\nBD = (DC * Cos(x))/Cos(y) -------------------- (equation 3)</p>\n<p>So, now from equation 1 and equation 3, we get:</p>\n<p>(DC * Sin(x))/Sin(y) = (DC * Cos(x))/Cos(y)\n==&gt; Sin(x)/Sin(y) = Cos(x)/Cos(y)\n==&gt; Sin(x)Cos(y) = Cos(x)Sin(y)\n==&gt; Sin(x)Cos(y) - Cos(x)Sin(y) = 0\n==&gt; Sin(x - y) = 0\n==&gt; x - y = 0\n==&gt; x = y</p>\n<p>So, angle C = angle DBC.\nSo, ∆DBC is isosceles triangle and so,\nBD = DC.</p>\n<p>So, BD = DC = AD --------- (Proved)</p>\n<p>------End of proof--------</p>\n" } ]
Question: A property of the midpoint of the hypotenuse in a right triangle Answer 1: <p>Any right triangle can be inscribed into a circle with center $C$ where the hypotenuse $OB$ is the diameter. Now it is straightforward to deduce that $CA$ is a radius, so $CO=CB=CA=r$. </p> <p>This is a result of elementary geometry, in fact it is possible to demonstrate that in any circle the angle at the centre is twice the corresponding angle at the circumference: $\angle OCB=2\angle OAB$: Euclid's <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">inscribed angle theorem</a>.</p> <p><a href="https://i.stack.imgur.com/NCjaw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NCjaw.jpg" alt="enter image description here"></a></p> <p>Now, if the angle $\angle OCB=180$, then $\angle OAB=90$, $OB$ is a diameter and $\triangle OAB$ is a right triangle.</p> <p><a href="https://i.stack.imgur.com/CGPzj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CGPzj.jpg" alt="enter image description here"></a></p> Answer 2: <p>In $\triangle ABC$ let $A'$ be the mid-point of $BC.$ Let $B'$ lie on $AC$ with $A'B'\| AB.$ Then $\angle CAB =\angle CB'A'$ because $BA\| A'B'.$ And $\angle CBA=\angle CA'B'$ for the same reason. So triangles $CAB$ and $CB'A'$ are similar. So $$CB'/CA=CA'/CB=1/2.$$ So $B'$ is the mid-point of $AC.$</p> <p>Suppose $\angle CAB$ is a right angle. Then so is $\angle CB'A'.$ Since also $AB'=CB',$ this means that $A'$ lies on the right bisector of $CA$. Therefore A' is equidistant from A and C. That is, $$A'A=A'C.$$ Interchanging the letters $B,C$ throughout all of this we also get $$A'A=A'B.$$</p> <p>In response to comments by the OP I am adding a second proof.</p> <p>Let $O$ be the origin. Let $\|v\|$ denote the length of a vector $v.$</p> <p>Let $B,C$ be non-zero vectors with $\angle BOC$ being a right angle. 
Let $l_1$ be the line through $O,C.$ Let $l_2$ be the line through $O,B.$ Let $l_3$ be the line through $B$ parallel to $l_1.$ We have $l_1\bot l_2$ and $l_1\| l_3$ so $l_2\bot l_3.$</p> <p>The vectors $B+C$ and $B-C$ lie on $l_3.$ The distances from $B+C$ to $B$, and from $ B-C$ to $B$ are, respectively, $\|(B+C)-B\|=\|C\|$ and $\| (B-C)-B\|=\|-C\|=\|C\|,$ which are equal distances. </p> <p>And $l_2$ contains $B$ with $l_2\bot l_3.$ So $l_2$ is the right bisector of the segment joining $B-C$ to $B+C.$ And $O$ lies on $l_2.$ Therefore the distances from $B+C$ to $O$ and from $B-C$ to $O$ are equal. That is $$(\bullet ) \quad \|B+C\|=\|(B+C)-O\|=\|(B-C)-O\|=\|B-C\|.$$</p> <p>The mid-point $P$ of $B, C$ is $(B+C)/2.$</p> <p>The distance from $P$ to $B$ is $$\|(B+C)/2-B\|=\|(B-C)/2\|=\|B-C\|/2.$$</p> <p>The distance from $P$ to $O$ is $$ \|(B+C)/2-O\|=\|(B+C)/2\|=\|B+C\|/2.$$ From $(\bullet )$ we see that $P$ is equidistant from $O$ and from $B.$ </p> Answer 3: <p>A geometric proof is much simpler. Generate a copy ACD of your ABC triangle so as to form a rectangle ABCD. The diagonals AC and BD intersect at their common midpoint O, which happens to be the center of the circumscribed circle. Therefore OA=OB=OC=Hypotenuse/2.</p> <p><a href="https://i.stack.imgur.com/vo5ap.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vo5ap.png" alt="enter image description here"></a></p> Answer 4: <p>Here is an image for the question:</p> <p><a href="https://i.stack.imgur.com/QcrER.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QcrER.png" alt="right angled triangle"></a></p> <p>As $C$ is the midpoint of $\vec{AB}$, $d$ and $e$ are the midpoints of $\vec{OA}$ and $\vec{OB}$ respectively, and so by symmetry, $|\vec{AC}|=|\vec{OC}|=|\vec{BC}|$.</p> <p>To see it another way, we can say $\vec{OC}=(d,e)$ with a magnitude of $\sqrt{d^2+e^2}$. 
$\vec{AC}=(-d,e)$ and therefore has the same magnitude as $\vec{OC}$ (and $\vec{BC}$).</p> Answer 5: <p>Let ABC be any triangle right-angled at C. Let D be the mid point of [AB].</p> <p>Take C as the origin, with $\vec{CA} = \vec{a}$ and $\vec{CB} = \vec{b}$</p> <p>Then $\vec{a} \cdot \vec{b} = 0$</p> <p>Also position vector of A = $\vec{a}$</p> <p>And position vector of B = $\vec{b}$</p> <p>Position vector of D = $\frac{\text{P.V of A + P.V of B}}{2} = \frac{\vec{a} + \vec{b}}{2}$</p> <p>$CD^2 = |\vec{CD}|^2 = |\text{P.V of D}|^2$</p> <p>= $\left| \frac{\vec{a} + \vec{b}}{2} \right|^2 = \left( \frac{\vec{a} + \vec{b}}{2} \right)^2$</p> <p>= $\frac 14 (\vec{a} + \vec{b}) \cdot (\vec{a} + \vec{b})$</p> <p>= $\frac 14 (a^2 + b^2 + 2 \vec{a} \cdot \vec{b})$</p> <p>= $\frac 14 (a^2 + b^2)$</p> <p>As $ \vec{a} \cdot \vec{b} = 0$</p> <p>Also $AD^2 = |\vec{AD}|^2$</p> <p>= $|\text{P.V of D - P.V of A}|^2$</p> <p>= $\left| \frac{\vec{a} + \vec{b}}{2} - \vec{a} \right|^2$</p> <p>= $\left( \frac{\vec{b} - \vec{a}}{2} \right)^2$</p> <p>= $\frac 14 (\vec{b} - \vec{a}) \cdot (\vec{b} - \vec{a})$</p> <p>= $\frac 14 (b^2 + a^2 - 2 \vec{b} \cdot \vec{a})$</p> <p>= $\frac 14 (a^2 + b^2)$</p> <p>As $ \vec{a} \cdot \vec{b} = 0$</p> <p>Hence $CD^2 = AD^2$</p> <p>$\to |CD| = |AD|$</p> <p>Also |BD| = |AD|</p> <p>Then |AD| = |BD| = |CD|</p> <p>Hence proved.</p> Answer 6: <p><a href="https://i.stack.imgur.com/RFmrU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RFmrU.png" alt="enter image description here" /></a></p> <p>Suppose the sign &quot;?&quot; is one of the signs &quot;&lt;&quot;, &quot;=&quot;, &quot;&gt;&quot; in the expression: <span class="math-container">$$AM=CM\ \ ? 
\ \ BM$$</span> If you already know that <strong>the longer side, the larger opposite angle</strong>, you can use this to see that &quot;?&quot; is the same in the next two expressions: <span class="math-container">$$\gamma\ ?\ \alpha$$</span> <span class="math-container">$$\delta\ ?\ \beta$$</span> We can add them: <span class="math-container">$$\gamma+\delta\ ?\ \alpha+\beta$$</span> Using <span class="math-container">$\alpha+\beta+\gamma+\delta=180^o$</span>, it follows that: <span class="math-container">$$\gamma+\delta\ ?\ 90^o$$</span> Thus <span class="math-container">$BM$</span> is <strong>greater than</strong> <span class="math-container">$AM$</span> and <span class="math-container">$CM$</span> iff the angle <span class="math-container">$B$</span> is <strong>acute</strong>, <strong>equal</strong> iff the angle <span class="math-container">$B$</span> is <strong>right</strong> and <strong>less than</strong> iff the angle is <strong>obtuse</strong>.</p> Answer 7: <ul> <li>Assume WLOG that the right-angled triangle has points $O(0,0), A(a,0), B(0,b)$ with $AB$ being the hypotenuse. </li> <li>The midpoint of the hypotenuse $M(\frac a2, \frac b2)$. 
</li> </ul> Answer 8: <blockquote> <p>As $C$ is mid point so $OC = BC = \frac{\vec{b}}{2}$</p> </blockquote> <p>Not that it matters to the following, but the second vector is reversed: $\;\overrightarrow{OC} = \overrightarrow{CB} = \frac{\vec b}{2}\,$.</p> <blockquote> <p>$AC = \frac{\vec{b}}{2} - \vec{a}$</p> </blockquote> <p>Let for brevity $\,\vec c = \overrightarrow{OC} = \frac{\vec b}{2}\,$, then $\overrightarrow{AC} = \vec c - \vec a\,$</p> <blockquote> <p>I am not getting the value of AC as $\frac{\vec{b}}{2}$</p> </blockquote> <p>That equality is obviously not true as vectors, but what we actually need to get is $|\vec{AC}| = \big|\frac{\vec{b}}{2}\big|=|\vec c|\,$.</p> <p>In order to get the magnitude of $|\vec{AC}|$ and prove that equality, we have to use the fact that $\triangle OAB$ is a right triangle. The premise $OA \perp AB$ translates in vector notation to the <a href="https://en.wikipedia.org/wiki/Dot_product#Properties" rel="nofollow noreferrer">dot product</a> being $0\,$: $$\overrightarrow{OA} \cdot \overrightarrow{AB} = 0 \quad\iff\quad \vec a \cdot (\vec b - \vec a) = 0 \quad\iff\quad \vec a \cdot \vec b - |\vec a|^2 = 0 \tag{1}$$</p> <p>Since $\vec b = 2\,\vec c$ relation $(1)$ can also be written as:</p> <p>$$2\,\vec a \cdot \vec c - |\vec a|^2 = 0 \tag{2}$$</p> <p>Using $(2)\,$, it follows that:</p> <p>$$ \require{cancel} |\vec{AC}|^2 = \overrightarrow{AC} \cdot \overrightarrow{AC} = (\vec c - \vec a) \cdot (\vec c - \vec a) = |\vec c|^2 \cancel{- 2\,\vec a \cdot \vec c} + \cancel{|\vec a|^2} = |\vec c|^2 \tag{3} $$</p> <p>Therefore $|\vec{AC}|^2 = |\vec{OC}|^2\,$, so $|AC| = |OC|\,$ and, since $|OC|=|BC|\,$, $\;|AC|=|OC|=|BC|$.</p> Answer 9: <p>Let the triangle be ABC with angle B = 90 and AC as hypotenuse. Join B to midpoint of AC. Midpoint of AC is called D.</p> <p>Let angle C = x. So, angle A = 90 - x. Let angle DBC = y. 
So, angle DBA = 90 - y</p> <p>Now, according to Sine rule of triangle, in ∆DBC:</p> <p>BD/Sin(C) = DC/Sin(angle DBC) ==&gt; BD/Sin(x) = DC/Sin(y) ==&gt; BD = (DC * Sin(x))/Sin(y) -------------------- (equation 1)</p> <p>Now, according to Sine rule of triangle, in ∆DBA:</p> <p>BD/Sin(A) = AD/Sin(angle DBA) ==&gt; BD/Sin(90 - x) = AD/Sin(90 - y) ==&gt; BD = (AD * Cos(x))/Cos(y) -------------------- (equation 2)</p> <p>We know that AD = DC, So: BD = (DC * Cos(x))/Cos(y) -------------------- (equation 3)</p> <p>So, now from equation 1 and equation 3, we get:</p> <p>(DC * Sin(x))/Sin(y) = (DC * Cos(x))/Cos(y) ==&gt; Sin(x)/Sin(y) = Cos(x)/Cos(y) ==&gt; Sin(x)Cos(y) = Cos(x)Sin(y) ==&gt; Sin(x)Cos(y) - Cos(x)Sin(y) = 0 ==&gt; Sin(x - y) = 0 ==&gt; x - y = 0 ==&gt; x = y</p> <p>So, angle C = angle DBC. So, ∆DBC is isosceles triangle and so, BD = DC.</p> <p>So, BD = DC = AD --------- (Proved)</p> <p>------End of proof--------</p>
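The coordinate setup used in Answer 7 ($O(0,0)$, $A(a,0)$, $B(0,b)$, $M(\frac a2,\frac b2)$) also lends itself to a quick numerical spot-check; the function below is a sketch of mine, not code from any of the answers.

```python
import math
import random

def hypotenuse_midpoint_distances(a, b):
    """Right triangle with legs on the axes: O = (0, 0), A = (a, 0), B = (0, b).
    Returns the distances from M, the midpoint of the hypotenuse AB,
    to the three vertices O, A, B."""
    mx, my = a / 2, b / 2                 # M = (a/2, b/2)
    return (math.hypot(mx, my),           # |MO|
            math.hypot(mx - a, my),       # |MA|
            math.hypot(mx, my - b))       # |MB|

random.seed(1)
for _ in range(5):
    a, b = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    d_o, d_a, d_b = hypotenuse_midpoint_distances(a, b)
    assert abs(d_o - d_a) < 1e-12 and abs(d_a - d_b) < 1e-12
```

Each distance equals half the hypotenuse, matching $OM=AM=BM=\frac 12\sqrt{a^2+b^2}$ from the bullet-list proof.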
How can I integrate $\ln \left( x+\sqrt{1+x^2} \right) $?
2,098,843
4
https://math.stackexchange.com/questions/2098843/how-can-i-integrate-ln-left-x-sqrt1x2-right
<blockquote> <p>How can I integrate <span class="math-container">$$\ln \left( x+\sqrt{1+x^2} \right) $$</span> The problem is from IA Maron Calculus, Problem 4.3.17. The problem is under the heading 'Integration by parts'. I have tried it, but it just leads to 2 unsolvable integrals</p> </blockquote>
[ { "answer_id": 2101380, "score": 2, "text": "<p>Assuming the correct question is the edited one. With integration by parts:\n$$\\int \\ln\\left(x+\\sqrt{1+x^{2}}\\right) \\,\\mbox{d}x\n= x\\ln\\left(x+\\sqrt{1+x^{2}}\\right)-\\color{blue}{\\int \\frac{x}{\\sqrt{1+x^2}}\\,\\mbox{d}x}$$\nSince:\n$$\\left(\\ln\\left(x+\\sqrt{1+x^{2}}\\right) \\right)' =\\left( \\mbox{arcsinh}\\,x \\right)' = \\frac{1}{\\sqrt{1+x^2}}$$</p>\n\n<p>For the blue integral, you can substitute $u=1+x^2$ and end up with:</p>\n\n<p>$$\\int \\ln\\left(x+\\sqrt{1+x^{2}}\\right) \\,\\mbox{d}x\n= x\\ln\\left(x+\\sqrt{1+x^{2}}\\right)-\\color{blue}{\\sqrt{1+x^2}}+C$$</p>\n" }, { "answer_id": 2098890, "score": 0, "text": "<p>As I wrote in the comment, the integral is not trivial. There has to be some error in the command.</p>\n\n<p>But for the sake of completeness, here is the solution.</p>\n\n<p>$$\\log \\left(x+\\sqrt{x^4+1}\\right) x-2 x+\\frac{1}{2} \\left(\\sinh ^{-1}\\left(x^2\\right)-\\frac{i \\left(-3 i+\\sqrt{3}\\right) \\tan ^{-1}\\left(\\frac{\\left(3+i \\sqrt{3}\\right) x^8-2 i \\sqrt{6-6 i \\sqrt{3}} \\sqrt{x^4+1} x^6+2 \\sqrt{3} i \\left(\\sqrt{2-2 i \\sqrt{3}} \\sqrt{x^4+1}+2\\right) x^4-2 i \\sqrt{6-6 i \\sqrt{3}} \\sqrt{x^4+1} x^2+\\sqrt{3} i-3}{\\left(-7 i+\\sqrt{3}\\right) x^8+2 \\left(i+\\sqrt{3}\\right) x^6+2 \\left(-3 i+\\sqrt{3}\\right) x^4+2 \\left(i+\\sqrt{3}\\right) x^2-i+\\sqrt{3}}\\right)}{\\sqrt{6-6 i \\sqrt{3}}}+\\frac{\\left(3-i \\sqrt{3}\\right) \\tan ^{-1}\\left(\\frac{\\left(3 i+\\sqrt{3}\\right) i x^8-2 i \\sqrt{6+6 i \\sqrt{3}} \\sqrt{x^4+1} x^6+2 \\sqrt{3} i \\left(\\sqrt{2+2 i \\sqrt{3}} \\sqrt{x^4+1}+2\\right) x^4-2 i \\sqrt{6+6 i \\sqrt{3}} \\sqrt{x^4+1} x^2+\\sqrt{3} i+3}{\\left(7 i+\\sqrt{3}\\right) x^8+2 \\left(-i+\\sqrt{3}\\right) x^6+2 \\left(3 i+\\sqrt{3}\\right) x^4+2 \\left(-i+\\sqrt{3}\\right) x^2+i+\\sqrt{3}}\\right)}{\\sqrt{6+6 i \\sqrt{3}}}+\\frac{\\left(3 i+\\sqrt{3}\\right) \\log \\left(16 \\left(x^4-x^2+1\\right)^2\\right)}{2 \\sqrt{6+6 i 
\\sqrt{3}}}+\\frac{\\left(-3 i+\\sqrt{3}\\right) \\log \\left(16 \\left(x^4-x^2+1\\right)^2\\right)}{2 \\sqrt{6-6 i \\sqrt{3}}}-\\frac{\\left(3 i+\\sqrt{3}\\right) \\log \\left(\\left(x^4-x^2+1\\right) \\left(\\left(-3 i+\\sqrt{3}\\right) x^4-i \\left(\\sqrt{2+2 i \\sqrt{3}} \\sqrt{x^4+1}+2\\right) x^2-3 i+\\sqrt{3}-2 i \\sqrt{2+2 i \\sqrt{3}} \\sqrt{x^4+1}\\right)\\right)}{2 \\sqrt{6+6 i \\sqrt{3}}}-\\frac{\\left(-3 i+\\sqrt{3}\\right) \\log \\left(\\left(x^4-x^2+1\\right) \\left(\\left(3 i+\\sqrt{3}\\right) x^4+i \\left(\\sqrt{2-2 i \\sqrt{3}} \\sqrt{x^4+1}+2\\right) x^2+3 i+\\sqrt{3}+2 \\sqrt{2-2 i \\sqrt{3}} i \\sqrt{x^4+1}\\right)\\right)}{2 \\sqrt{6-6 i \\sqrt{3}}}\\right)-\\frac{\\left(-3 i+\\sqrt{3}\\right) \\tan ^{-1}\\left(\\frac{1}{2} \\left(1-i \\sqrt{3}\\right) x\\right)}{\\sqrt{-6+6 i \\sqrt{3}}}-\\frac{\\left(3 i+\\sqrt{3}\\right) \\tan ^{-1}\\left(\\frac{1}{2} \\left(1+i \\sqrt{3}\\right) x\\right)}{\\sqrt{-6-6 i \\sqrt{3}}}$$</p>\n" }, { "answer_id": 3512569, "score": 0, "text": "<p>Notice the identity</p>\n\n<p><span class=\"math-container\">$$\\sqrt{\\sinh^2(t)+1}=\\cosh t$$</span> which hints the change of variable <span class=\"math-container\">$x=\\sinh t$</span> and <span class=\"math-container\">$dx=\\cosh t\\,dt$</span>.</p>\n\n<p>The integral becomes</p>\n\n<p><span class=\"math-container\">$$\\int\\log(\\sinh t+\\cosh t)\\cosh t\\,dt=\\int t\\cosh t\\,dt.$$</span></p>\n\n<p>Then by parts</p>\n\n<p><span class=\"math-container\">$$t\\,\\sinh t-\\int\\sinh t\\,dt=t\\,\\sinh t-\\cosh t=x\\text{ arsinh }x-\\sqrt{x^2+1}.$$</span></p>\n" } ]
Question: How can I integrate $\ln \left( x+\sqrt{1+x^2} \right) $? Answer 1: <p>Assuming the correct question is the edited one. With integration by parts: $$\int \ln\left(x+\sqrt{1+x^{2}}\right) \,\mbox{d}x = x\ln\left(x+\sqrt{1+x^{2}}\right)-\color{blue}{\int \frac{x}{\sqrt{1+x^2}}\,\mbox{d}x}$$ Since: $$\left(\ln\left(x+\sqrt{1+x^{2}}\right) \right)' =\left( \mbox{arcsinh}\,x \right)' = \frac{1}{\sqrt{1+x^2}}$$</p> <p>For the blue integral, you can substitute $u=1+x^2$ and end up with:</p> <p>$$\int \ln\left(x+\sqrt{1+x^{2}}\right) \,\mbox{d}x = x\ln\left(x+\sqrt{1+x^{2}}\right)-\color{blue}{\sqrt{1+x^2}}+C$$</p> Answer 2: <p>As I wrote in the comment, the integral is not trivial. There has to be some error in the command.</p> <p>But for the sake of completeness, here is the solution.</p> <p>$$\log \left(x+\sqrt{x^4+1}\right) x-2 x+\frac{1}{2} \left(\sinh ^{-1}\left(x^2\right)-\frac{i \left(-3 i+\sqrt{3}\right) \tan ^{-1}\left(\frac{\left(3+i \sqrt{3}\right) x^8-2 i \sqrt{6-6 i \sqrt{3}} \sqrt{x^4+1} x^6+2 \sqrt{3} i \left(\sqrt{2-2 i \sqrt{3}} \sqrt{x^4+1}+2\right) x^4-2 i \sqrt{6-6 i \sqrt{3}} \sqrt{x^4+1} x^2+\sqrt{3} i-3}{\left(-7 i+\sqrt{3}\right) x^8+2 \left(i+\sqrt{3}\right) x^6+2 \left(-3 i+\sqrt{3}\right) x^4+2 \left(i+\sqrt{3}\right) x^2-i+\sqrt{3}}\right)}{\sqrt{6-6 i \sqrt{3}}}+\frac{\left(3-i \sqrt{3}\right) \tan ^{-1}\left(\frac{\left(3 i+\sqrt{3}\right) i x^8-2 i \sqrt{6+6 i \sqrt{3}} \sqrt{x^4+1} x^6+2 \sqrt{3} i \left(\sqrt{2+2 i \sqrt{3}} \sqrt{x^4+1}+2\right) x^4-2 i \sqrt{6+6 i \sqrt{3}} \sqrt{x^4+1} x^2+\sqrt{3} i+3}{\left(7 i+\sqrt{3}\right) x^8+2 \left(-i+\sqrt{3}\right) x^6+2 \left(3 i+\sqrt{3}\right) x^4+2 \left(-i+\sqrt{3}\right) x^2+i+\sqrt{3}}\right)}{\sqrt{6+6 i \sqrt{3}}}+\frac{\left(3 i+\sqrt{3}\right) \log \left(16 \left(x^4-x^2+1\right)^2\right)}{2 \sqrt{6+6 i \sqrt{3}}}+\frac{\left(-3 i+\sqrt{3}\right) \log \left(16 \left(x^4-x^2+1\right)^2\right)}{2 \sqrt{6-6 i \sqrt{3}}}-\frac{\left(3 i+\sqrt{3}\right) \log 
\left(\left(x^4-x^2+1\right) \left(\left(-3 i+\sqrt{3}\right) x^4-i \left(\sqrt{2+2 i \sqrt{3}} \sqrt{x^4+1}+2\right) x^2-3 i+\sqrt{3}-2 i \sqrt{2+2 i \sqrt{3}} \sqrt{x^4+1}\right)\right)}{2 \sqrt{6+6 i \sqrt{3}}}-\frac{\left(-3 i+\sqrt{3}\right) \log \left(\left(x^4-x^2+1\right) \left(\left(3 i+\sqrt{3}\right) x^4+i \left(\sqrt{2-2 i \sqrt{3}} \sqrt{x^4+1}+2\right) x^2+3 i+\sqrt{3}+2 \sqrt{2-2 i \sqrt{3}} i \sqrt{x^4+1}\right)\right)}{2 \sqrt{6-6 i \sqrt{3}}}\right)-\frac{\left(-3 i+\sqrt{3}\right) \tan ^{-1}\left(\frac{1}{2} \left(1-i \sqrt{3}\right) x\right)}{\sqrt{-6+6 i \sqrt{3}}}-\frac{\left(3 i+\sqrt{3}\right) \tan ^{-1}\left(\frac{1}{2} \left(1+i \sqrt{3}\right) x\right)}{\sqrt{-6-6 i \sqrt{3}}}$$</p> Answer 3: <p>Notice the identity</p> <p><span class="math-container">$$\sqrt{\sinh^2(t)+1}=\cosh t$$</span> which hints the change of variable <span class="math-container">$x=\sinh t$</span> and <span class="math-container">$dx=\cosh t\,dt$</span>.</p> <p>The integral becomes</p> <p><span class="math-container">$$\int\log(\sinh t+\cosh t)\cosh t\,dt=\int t\cosh t\,dt.$$</span></p> <p>Then by parts</p> <p><span class="math-container">$$t\,\sinh t-\int\sinh t\,dt=t\,\sinh t-\cosh t=x\text{ arsinh }x-\sqrt{x^2+1}.$$</span></p>
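A quick numeric sanity check of the closed form $x\,\text{arsinh}\,x-\sqrt{x^2+1}$ from the answers above (this check is an editorial addition, not part of the thread; the helper names `antiderivative` and `integrand` are illustrative only): differentiating the claimed antiderivative should recover the integrand.

```python
import math

# Central-difference derivative of the claimed antiderivative
# x*arsinh(x) - sqrt(1+x^2) should match ln(x + sqrt(1+x^2)).
def antiderivative(x):
    return x * math.asinh(x) - math.sqrt(1 + x * x)

def integrand(x):
    return math.log(x + math.sqrt(1 + x * x))

h = 1e-6
for x in (0.5, 1.0, 2.0):
    numeric_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-6
print("ok")
```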
Show that $\overline{f(z)}$ is holomorphic in $D(0;1)$ if and only if $f$ is constant.
2,098,877
4
https://math.stackexchange.com/questions/2098877/show-that-overlinefz-is-holomorhpic-in-d01-if-and-only-if-f-is-con
<p>Let $f$ be holomorphic in $D(0;1)$. Show that $\overline{f(z)}$ is holomorphic in $D(0;1)$ if and only if $f$ is constant.</p> <p>It is clear to me that if $f$ is constant then $\overline{f(z)}$ is holomorphic since the Cauchy-Riemann equations will be satisfied. However, I'm not sure about how to show the other direction. How do I do that?</p>
[ { "answer_id": 2098885, "score": 4, "text": "<p>There are several ways to show that. The most elementary is probably to look at the Cauchy-Riemann equations again.</p>\n\n<p>A little bit more advanced, but shorter would be:</p>\n\n<p>If $\\overline f$ is holomorphic, then so are $f+\\overline f$ and $f-\\overline f$. But these are $\\mathbb R$-valued and $i\\mathbb R$-valued functions respectively, hence not open mappings. Then they must both be constant. Now $f$ is the sum of constant functions.</p>\n" }, { "answer_id": 2865196, "score": 0, "text": "<p>If $f$ and $\\overline f$ are holomorphic in a region $G$ (eg unit disc), then by Cauchy-Riemann, $$u_x = v_y, v_x = -u_y,u_x=-v_y,v_x=u_y \\implies u_x=v_x=u_y=v_y=f_x=f_y=f'(z)=0$$</p>\n\n<p>By <a href=\"http://math.sfsu.edu/beck/complex.html\" rel=\"nofollow noreferrer\">Thm 2.17</a>, $f$ is constant in $G$.</p>\n" } ]
Question: Show that $\overline{f(z)}$ is holomorphic in $D(0;1)$ if and only if $f$ is constant. Answer 1: <p>There are several ways to show that. The most elementary is probably to look at the Cauchy-Riemann equations again.</p> <p>A little bit more advanced, but shorter would be:</p> <p>If $\overline f$ is holomorphic, then so are $f+\overline f$ and $f-\overline f$. But these are $\mathbb R$-valued and $i\mathbb R$-valued functions respectively, hence not open mappings. Then they must both be constant. Now $f$ is the sum of constant functions.</p> Answer 2: <p>If $f$ and $\overline f$ are holomorphic in a region $G$ (eg unit disc), then by Cauchy-Riemann, $$u_x = v_y, v_x = -u_y,u_x=-v_y,v_x=u_y \implies u_x=v_x=u_y=v_y=f_x=f_y=f'(z)=0$$</p> <p>By <a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">Thm 2.17</a>, $f$ is constant in $G$.</p>
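A small numeric illustration of the Cauchy-Riemann argument above (an editorial sketch, not part of the original answers): for $f(z)=\overline z$ we have $u(x,y)=x$ and $v(x,y)=-y$, so $u_x=1$ while $v_y=-1$ and the Cauchy-Riemann equation $u_x=v_y$ fails.

```python
# Numeric Cauchy-Riemann check for f(z) = conj(z): u is the real part,
# v the imaginary part of conj(x + iy).
def u(x, y):
    return complex(x, y).conjugate().real

def v(x, y):
    return complex(x, y).conjugate().imag

h = 1e-6
u_x = (u(1 + h, 1) - u(1 - h, 1)) / (2 * h)
v_y = (v(1, 1 + h) - v(1, 1 - h)) / (2 * h)
assert abs(u_x - 1) < 1e-8
assert abs(v_y + 1) < 1e-8
assert abs(u_x - v_y) > 1  # u_x != v_y: conjugation is not holomorphic
print("ok")
```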
A limit of indeterminate
2,098,912
4
https://math.stackexchange.com/questions/2098912/a-limit-of-indeterminate
<p>$$\lim _{x\to \infty }\frac{\left(e^x+x\right)^{n+1}-e^{\left(n+1\right)x}}{xe^{nx}}\:= \:\:?$$</p> <p>I tried to get it to a simpler form like this:</p> <p>$$\lim _{x\to \infty }\frac{\left(e^x+x\right)^{n+1}-e^{\left(n+1\right)x}}{xe^{nx}}\:=\:\lim _{x\to \infty }\frac{e^{\left(n+1\right)x}\left(1+\frac{x}{e^x}\right)^{n+1}-e^{\left(n+1\right)x}}{xe^{nx}}=\lim _{x\to \infty }\frac{e^x}{x}\left(\left(1+\frac{x}{e^x}\right)^{n+1}-1\right)$$</p> <p>Then I denoted $\frac{e^x}{x}$ by $t$, which also tends to infinity. Then I applied the formula for $a^n-b^n$ (I've considered $1$ as $1^{n+1}$) and I got $1$, but the answer is $n+1$. What have I missed?</p>
[ { "answer_id": 2098924, "score": 2, "text": "<blockquote>\n  <p>$$\\begin{align}\\lim_{x\\to \\infty}\\frac{e^x}{x}((1+\\frac{x}{e^x})^{n+1} - 1) &amp;= \\lim_{t\\to \\infty}t((1+\\frac{1}{t})^{n+1}-1) \\\\&amp;= \\lim_{t\\to \\infty}t(1+\\frac{n+1}{t}-1 + o(\\frac{1}{t}))\\\\&amp;=n+1\\end{align}$$</p>\n</blockquote>\n" }, { "answer_id": 2098925, "score": 2, "text": "<p>Use binomial expansion on the numerator :\n$$\\left(e^x+x\\right)^{n+1}-e^{(n+1)x} = \\sum_{k=0}^n \\binom{n+1}{k} e^{kx}x^{n+1-k}$$\nNow the dominant term here is the term of number $k=n$, so\n$$\\frac{\\left(e^x+x\\right)^{n+1}-e^{(n+1)x}}{xe^{nx}} \\sim \\frac{(n+1)xe^{nx}}{xe^{nx}}=n+1$$\nAn alternative method, starting with your factorization, is to use $(1+u)^\\alpha-1\\sim \\alpha u$, so :\n$$\\frac{e^x}{x}\\left(\\left(1+\\frac{x}{e^x}\\right)^{n+1}-1\\right) \\sim \\frac{e^x}{x}.(n+1)\\frac{x}{e^x}$$\nwhich gives the same result.</p>\n" }, { "answer_id": 3241798, "score": 0, "text": "<p>First multiply numerator and denominator by e,</p>\n\n<p>We get <span class=\"math-container\">$\\dfrac{e(1+\\frac{x}{e^x})^{n+1}-1}{x}. $</span>\n= <span class=\"math-container\">$$\\dfrac{e*\\left(e^{\\frac{x}{e^x}(n+1)}-1\\right)}{x}. $$</span> ( as <span class=\"math-container\">$1^\\infty$</span> form)\nApplying L'Hôpital's rule and simplifying, we get </p>\n\n<p>= <span class=\"math-container\">$$\\dfrac{e(e^{\\frac{x}{e^x}(n+1)})*(n+1)*(1-x)}{e^x} = e(e^{\\frac{x}{e^x}(n+1)-1})(n+1)$$</span></p>\n\n<p><span class=\"math-container\">$$~e * e^{-1} * (n+1) = n+1$$</span></p>\n" } ]
Question: A limit of indeterminate Answer 1: <blockquote> <p>$$\begin{align}\lim_{x\to \infty}\frac{e^x}{x}((1+\frac{x}{e^x})^{n+1} - 1) &amp;= \lim_{t\to \infty}t((1+\frac{1}{t})^{n+1}-1) \\&amp;= \lim_{t\to \infty}t(1+\frac{n+1}{t}-1 + o(\frac{1}{t}))\\&amp;=n+1\end{align}$$</p> </blockquote> Answer 2: <p>Use binomial expansion on the numerator : $$\left(e^x+x\right)^{n+1}-e^{(n+1)x} = \sum_{k=0}^n \binom{n+1}{k} e^{kx}x^{n+1-k}$$ Now the dominant term here is the term of number $k=n$, so $$\frac{\left(e^x+x\right)^{n+1}-e^{(n+1)x}}{xe^{nx}} \sim \frac{(n+1)xe^{nx}}{xe^{nx}}=n+1$$ An alternative method, starting with your factorization, is to use $(1+u)^\alpha-1\sim \alpha u$, so : $$\frac{e^x}{x}\left(\left(1+\frac{x}{e^x}\right)^{n+1}-1\right) \sim \frac{e^x}{x}.(n+1)\frac{x}{e^x}$$ which gives the same result.</p> Answer 3: <p>First multiply numerator and denominator by e,</p> <p>We get <span class="math-container">$\dfrac{e(1+\frac{x}{e^x})^{n+1}-1}{x}. $</span> = <span class="math-container">$$\dfrac{e*\left(e^{\frac{x}{e^x}(n+1)}-1\right)}{x}. $$</span> ( as <span class="math-container">$1^\infty$</span> form) Applying L'Hôpital's rule and simplifying, we get </p> <p>= <span class="math-container">$$\dfrac{e(e^{\frac{x}{e^x}(n+1)})*(n+1)*(1-x)}{e^x} = e(e^{\frac{x}{e^x}(n+1)-1})(n+1)$$</span></p> <p><span class="math-container">$$~e * e^{-1} * (n+1) = n+1$$</span></p>
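The limit can also be checked numerically (an editorial sketch, not from the thread; the helper `value` is mine): after the factorization in the question, the expression equals $\frac{(1+t)^{n+1}-1}{t}$ with $t = x e^{-x} \to 0$, which tends to $n+1$.

```python
import math

# ((e^x + x)^{n+1} - e^{(n+1)x}) / (x e^{nx}) rewritten as
# ((1+t)^{n+1} - 1)/t with t = x*e^{-x}; expm1/log1p keep precision
# when t is tiny.
def value(x, n):
    t = x * math.exp(-x)
    return math.expm1((n + 1) * math.log1p(t)) / t

for n in (1, 2, 5):
    assert abs(value(25.0, n) - (n + 1)) < 1e-6
print("ok")
```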
Summation of $\arcsin $ series.
2,099,081
4
https://math.stackexchange.com/questions/2099081/summation-of-arcsin-series
<p>What is $a $ if </p> <p>$$\sum _{n=1} ^{\infty} \arcsin \left(\frac {\sqrt {n}-\sqrt {n-1}}{\sqrt {n (n+1)}}\right) =\frac {\pi }{a} \,?$$</p> <p><strong>Attempt:</strong> What I tried is to convert the series to $\arctan$ and then convert it into a telescoping series. So in terms of $\arctan $ it becomes </p> <p>$$\arctan \left(\frac {\sqrt {n}-\sqrt {n-1}}{\sqrt {n}+\sqrt {n-1}}\right) $$</p> <p>but now if I divide by $n$ it simplifies as $n\frac {\pi}4-\sum _1^{\infty} \arctan \left(\frac {\sqrt {n-1}}{\sqrt {n}}\right) $ but as $n$ is tending towards infinity it will lead to infinity which seems wrong. Also note that $a$ is an integer. Thanks!</p>
[ { "answer_id": 2099218, "score": 6, "text": "<p>We can utilize the following trigonometric identity \n$$\\arcsin x-\\arcsin y = \\arcsin(x\\sqrt{1-y^2}-y\\sqrt{1-x^2})$$\nby putting $x=\\frac{1}{\\sqrt{n}}$, $y=\\frac{1}{\\sqrt{n+1}}$. Then we get\n\\begin{align}\\arcsin \\frac{1}{\\sqrt{n}}-\\arcsin \\frac{1}{\\sqrt{n+1}} &amp;= \\arcsin\\left(\\frac{1}{\\sqrt{n}}\\sqrt{1-\\frac{1}{n+1}}-\\frac{1}{\\sqrt{n+1}}\\sqrt{1-\\frac{1}{n}}\\right)\\\\\n&amp;=\\arcsin\\left(\\frac{\\sqrt{n}-\\sqrt{n-1}}{\\sqrt{n}\\sqrt{n+1}}\\right). \n\\end{align}\nSo we have a telescoping series\n\\begin{align}\n\\sum_{n=1}^{k} \\arcsin\\left(\\frac{\\sqrt{n}-\\sqrt{n-1}}{\\sqrt{n}\\sqrt{n+1}}\\right) &amp;= \\sum_{n=1}^{k} \\left(\\arcsin \\frac{1}{\\sqrt{n}}-\\arcsin \\frac{1}{\\sqrt{n+1}}\\right)\\\\\n&amp;=\\arcsin 1-\\arcsin \\frac{1}{\\sqrt{k+1}}\n\\end{align}\nand in the limit $\\sum_{n=1}^{\\infty} \\arcsin\\left(\\frac{\\sqrt{n}-\\sqrt{n-1}}{\\sqrt{n}\\sqrt{n+1}}\\right)=\\arcsin 1-\\arcsin 0=\\frac{\\pi}{2}$ and $a=2$.</p>\n" }, { "answer_id": 2099205, "score": 4, "text": "<p>Taking the principal branch of $\\arcsin$ (with values in $\\bigl[-\\frac{\\pi}{2}, \\frac{\\pi}{2}\\bigr]$), we have</p>\n\n<p>$$\\tan\\bigl(\\arcsin s\\bigr) = \\frac{\\sin \\bigl(\\arcsin s\\bigr)}{\\cos \\bigl(\\arcsin s\\bigr)} = \\frac{s}{\\sqrt{1 - s^2}}.$$</p>\n\n<p>With $s = \\frac{\\sqrt{n} - \\sqrt{n-1}}{\\sqrt{n(n+1)}}$, we get</p>\n\n<p>\\begin{align}\n1 - s^2 &amp;= 1 - \\frac{(\\sqrt{n} - \\sqrt{n-1})^2}{n(n+1)}\\\\\n&amp;= \\frac{n^2 + n - (n - 2\\sqrt{n(n-1)} + n-1)}{n(n+1)}\\\\\n&amp;= \\frac{n(n-1) + 2\\sqrt{n(n-1)} + 1}{n(n+1)}\\\\\n&amp;= \\frac{(1 + \\sqrt{n(n-1)})^2}{n(n+1)},\n\\end{align}</p>\n\n<p>and so</p>\n\n<p>\\begin{align}\n\\tan \\biggl(\\arcsin \\frac{\\sqrt{n} - \\sqrt{n-1}}{\\sqrt{n(n+1)}}\\biggr)\n&amp;= \\frac{\\sqrt{n} - \\sqrt{n-1}}{\\sqrt{n(n+1)}}\\cdot \\frac{\\sqrt{n(n+1)}}{1 + \\sqrt{n(n-1)}} \\\\\n&amp;= \\frac{\\sqrt{n} - \\sqrt{n-1}}{1 + \\sqrt{n} 
\\sqrt{n-1}} \\\\\n&amp;= \\tan\\bigl(\\arctan \\sqrt{n} - \\arctan \\sqrt{n-1}\\bigr),\n\\end{align}</p>\n\n<p>whence we obtain</p>\n\n<p>$$\\sum_{n = 1}^{\\infty} \\arcsin \\frac{\\sqrt{n} - \\sqrt{n-1}}{\\sqrt{n(n+1)}} = \\sum_{n = 1}^\\infty \\bigl( \\arctan \\sqrt{n} - \\arctan \\sqrt{n-1}\\bigr) = \\frac{\\pi}{2}.$$</p>\n" } ]
Question: Summation of $\arcsin $ series. Answer 1: <p>We can utilize the following trigonometric identity $$\arcsin x-\arcsin y = \arcsin(x\sqrt{1-y^2}-y\sqrt{1-x^2})$$ by putting $x=\frac{1}{\sqrt{n}}$, $y=\frac{1}{\sqrt{n+1}}$. Then we get \begin{align}\arcsin \frac{1}{\sqrt{n}}-\arcsin \frac{1}{\sqrt{n+1}} &amp;= \arcsin\left(\frac{1}{\sqrt{n}}\sqrt{1-\frac{1}{n+1}}-\frac{1}{\sqrt{n+1}}\sqrt{1-\frac{1}{n}}\right)\\ &amp;=\arcsin\left(\frac{\sqrt{n}-\sqrt{n-1}}{\sqrt{n}\sqrt{n+1}}\right). \end{align} So we have a telescoping series \begin{align} \sum_{n=1}^{k} \arcsin\left(\frac{\sqrt{n}-\sqrt{n-1}}{\sqrt{n}\sqrt{n+1}}\right) &amp;= \sum_{n=1}^{k} \left(\arcsin \frac{1}{\sqrt{n}}-\arcsin \frac{1}{\sqrt{n+1}}\right)\\ &amp;=\arcsin 1-\arcsin \frac{1}{\sqrt{k+1}} \end{align} and in the limit $\sum_{n=1}^{\infty} \arcsin\left(\frac{\sqrt{n}-\sqrt{n-1}}{\sqrt{n}\sqrt{n+1}}\right)=\arcsin 1-\arcsin 0=\frac{\pi}{2}$ and $a=2$.</p> Answer 2: <p>Taking the principal branch of $\arcsin$ (with values in $\bigl[-\frac{\pi}{2}, \frac{\pi}{2}\bigr]$), we have</p> <p>$$\tan\bigl(\arcsin s\bigr) = \frac{\sin \bigl(\arcsin s\bigr)}{\cos \bigl(\arcsin s\bigr)} = \frac{s}{\sqrt{1 - s^2}}.$$</p> <p>With $s = \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}$, we get</p> <p>\begin{align} 1 - s^2 &amp;= 1 - \frac{(\sqrt{n} - \sqrt{n-1})^2}{n(n+1)}\\ &amp;= \frac{n^2 + n - (n - 2\sqrt{n(n-1)} + n-1)}{n(n+1)}\\ &amp;= \frac{n(n-1) + 2\sqrt{n(n-1)} + 1}{n(n+1)}\\ &amp;= \frac{(1 + \sqrt{n(n-1)})^2}{n(n+1)}, \end{align}</p> <p>and so</p> <p>\begin{align} \tan \biggl(\arcsin \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}\biggr) &amp;= \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}}\cdot \frac{\sqrt{n(n+1)}}{1 + \sqrt{n(n-1)}} \\ &amp;= \frac{\sqrt{n} - \sqrt{n-1}}{1 + \sqrt{n} \sqrt{n-1}} \\ &amp;= \tan\bigl(\arctan \sqrt{n} - \arctan \sqrt{n-1}\bigr), \end{align}</p> <p>whence we obtain</p> <p>$$\sum_{n = 1}^{\infty} \arcsin \frac{\sqrt{n} - \sqrt{n-1}}{\sqrt{n(n+1)}} = \sum_{n = 1}^\infty \bigl( 
\arctan \sqrt{n} - \arctan \sqrt{n-1}\bigr) = \frac{\pi}{2}.$$</p>
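The telescoping identity in the answers can be verified numerically (an editorial addition; the helper `partial_sum` is mine): the $k$-th partial sum should equal $\arcsin 1 - \arcsin\frac{1}{\sqrt{k+1}}$ and approach $\frac{\pi}{2}$, confirming $a=2$.

```python
import math

# Partial sums of arcsin((sqrt(n)-sqrt(n-1))/sqrt(n(n+1)))
def partial_sum(k):
    total = 0.0
    for n in range(1, k + 1):
        total += math.asin((math.sqrt(n) - math.sqrt(n - 1)) / math.sqrt(n * (n + 1)))
    return total

k = 10_000
s = partial_sum(k)
# Telescoped closed form: arcsin(1) - arcsin(1/sqrt(k+1))
assert abs(s - (math.pi / 2 - math.asin(1 / math.sqrt(k + 1)))) < 1e-9
# The tail arcsin(1/sqrt(k+1)) is ~0.01 here, so s is already close to pi/2.
assert abs(s - math.pi / 2) < 0.02
print("ok")
```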
Subgroup of $\mathbb{R}$ that is a nontrivial direct sum of $n$ groups
2,099,597
4
https://math.stackexchange.com/questions/2099597/subgroup-of-mathbbr-that-is-a-nontrivial-direct-sum-of-n-groups
<p>Given any integer $n\geq 2$, I need to find a subgroup of $\mathbb{R}$ which is a nontrivial direct sum of $n$ groups.</p> <p>Initially, I was thinking of $\mathbb{Q}$, since it has the nice property of countability, but then I remembered that every two nontrivial subgroups of $\mathbb{Q}$ have nontrivial intersection. </p> <p>So, I am really at a loss as to how to even start. What $n$ groups comprise the nontrivial direct sum of a subgroup of $\mathbb{R}$? And which subgroup of $\mathbb{R}$? </p> <p>I thank you ahead of time for any help you can give.</p>
[ { "answer_id": 2099603, "score": 3, "text": "<p>Any finitely generated subgroup of $\\mathbb{R}$ is torsion free, thus free. Thus, you are to find $n$ elements of $\\mathbb{R}$ which are linearly independent over $\\mathbb{Z}$. Hint: a bunch of irrationals will do the trick!</p>\n" }, { "answer_id": 2113714, "score": 2, "text": "<p>Consider a subgroup $\\langle x_1, \\ldots, x_t \\rangle$ such that $x_1, \\ldots, x_t \\in \\mathbb{R}$ are linearly independent over $\\mathbb{Z}$. In this case $\\langle x_1, \\ldots, x_t \\rangle \\cong \\langle x_1 \\rangle \\oplus \\cdots \\oplus \\langle x_t \\rangle$.</p>\n\n<p>For example, the subgroup $H = \\langle 1, \\sqrt{2} \\rangle$ works for $t = 2$. Here $H = \\{a + b \\sqrt{2} : a, b \\in \\mathbb{Z}\\} \\cong \\mathbb{Z} \\oplus \\mathbb{Z}$.</p>\n\n<p>Some families of $\\mathbb{Z}$-independent real numbers are $\\{ \\log(p) : p \\text{ prime} \\}$ and $\\{ \\sqrt{p} : p \\text{ prime} \\}$.</p>\n" }, { "answer_id": 2113859, "score": 2, "text": "<p>There are two ways of solving this problem which came to my mind at first glance.</p>\n\n<p>1) <strong>Probably a bit \"cheating\"</strong>, as I assume you don't know the classification of finitely generated abelian groups. However it says (in its simplest form) that a <em>finitely generated</em> abelian group $A$ is of the form $A \\cong \\mathbb{Z}^k \\times T$, where $T$ is the subgroup of all torsion elements. As you are looking for a certain subgroup of $\\mathbb{R}$ (I assume your operation is <em>addition</em>), it is necessary to note that $(\\mathbb{R}, +)$ does <em>not</em> contain any non-trivial elements of finite order. Indeed, can you think of a <em>non-zero</em> $x \\in \\mathbb{R}$, s.t. $x + \\dots + x = 0$?</p>\n\n<p>So if you take some elements $x_1, \\dots, x_n \\in \\mathbb{R}$, you can be certain that they generate a subgroup isomorphic to $\\mathbb{Z}^k = \\mathbb{Z} \\times \\dots \\times \\mathbb{Z}$ for some $k$, i.e. 
$\\langle x_1, \\dots, x_n \\rangle \\cong \\mathbb{Z}^k$. However you may not always be lucky and have $k &lt; n$. And in your question you are asking for the \"optimal\" case where $k = n$. But this is exactly the case when your chosen $x_1, \\dots, x_n$ happen to be <em>linearly independent over</em> $\\mathbb{Z}$. Why? Suppose they are not. Then you can find <em>integers</em> $a_1, \\dots, a_n \\in \\mathbb{Z}$ not all equal to zero, s.t. $a_1 x_1 + \\dots + a_n x_n = 0$. Without loss let us assume that $a_n \\neq 0$, then you can rearrange this to $x_n = -\\frac{1}{a_n} (a_1 x_1 + \\dots + a_{n-1} x_{n-1})$. In other words your $x_n$ can be expressed in terms of the other $x_1, \\dots , x_{n-1}$, i.e. $x_n$ is an \"unnecessary generator\" for the group $\\langle x_1, \\dots, x_n \\rangle$. Hence it would suffice to look at $\\langle x_1, \\dots, x_{n-1} \\rangle$. I am sure you now get the point.</p>\n\n<p>2) <strong>The more or less direct approach</strong> would be to look at one of the characterisations of direct products. I assume you know this, since this is probably something done in every first course in group theory (<em>internal direct product</em>). Namely, let $G$ be a group and $A,B \\leq G$ be subgroups. Then $G \\cong A \\times B$ if and only if the following $3$ properties hold:</p>\n\n<p>a) $A \\cap B = \\{ 0 \\}$</p>\n\n<p>b) Every element of $G$ can be expressed as a product of an element of $A$ and an element of $B$, i.e. $G = A + B := \\{a+b | a \\in A, b \\in B\\}$ (I took the symbol \"+\" since we are working with addition in this very example)</p>\n\n<p>c) Both $A$ and $B$ are normal subgroups of $G$.</p>\n\n<p>So take two non-trivial elements $x,y \\in \\mathbb{R}$. Since they are non-torsion (see above paragraph) we have $A:=\\langle y \\rangle \\cong \\mathbb{Z}$ and $B:= \\langle x \\rangle \\cong \\mathbb{Z}$. As $\\mathbb{R}$ is abelian all its subgroups are <em>normal</em>, in particular item c) holds. 
As we will form the group $A \\cdot B = : G \\subseteq \\mathbb{R}$, we only have to handle item a). But can you think of two explicit elements $x$ and $y$ such that the intersection of the corresponding subgroups $A$ and $B$ is exactly the zero element? (I am quite sure that you can). However if you have such two elements you have accomplished that $\\mathbb{R} \\supseteq G \\cong A \\times B \\cong \\mathbb{Z}^2$. For example take (the already mentioned) numbers of the form $\\sqrt{j^2 + 1}$. If you take $x=\\sqrt{1^2 + 1} = \\sqrt{2}$ and $y =\\sqrt{2^2 + 1} = \\sqrt{5}$ you have an example. But now you can proceed inductively. Add another element, $\\sqrt{10}$, and consider the subgroup generated by this element. You want it to have trivial intersection with your already formed group $G$. In any case this breaks down to the fact that these numbers are linearly independent over $\\mathbb{Z}$, which you can easily show by simple algebra. Indeed suppose they have non-trivial intersection, i.e. there exist <em>integers</em> $a,b,c \\in \\mathbb{Z}$ (where $c \\neq 0$), s.t. $a \\sqrt{2} + b \\sqrt{5} = c \\sqrt{10}$ (you certainly see why this is \"the same\" as linear independence over $\\mathbb{Z}$). Square the whole thing and you will see the solution. You will get a contradiction and obtain that $a = b = c = 0$. I hope my answer helps you to understand why the other answers to this question were already enough help :)</p>\n" }, { "answer_id": 2113879, "score": 2, "text": "<p>If you want a specific example, consider for instance the subgroup generated by\nthe numbers \n$$\ne^1, e^2, e^3,..., e^n\n$$\nIn view of <a href=\"https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem\" rel=\"nofollow noreferrer\">Lindemann–Weierstrass Theorem (Baker's Reformulation)</a>, these numbers are linearly independent over $\\bar{{\\mathbb Q}}$ and, hence, over ${\\mathbb Q}$. 
Therefore, regarding ${\\mathbb R}$ as a vector space over ${\\mathbb Q}$, the subspace spanned by these exponentials is isomorphic to ${\\mathbb Q}^n$. Hence, the subgroup they generate is isomorphic to ${\\mathbb Z}^n$. The latter, I trust you know, is an $n$-fold direct product of nontrivial groups. </p>\n\n<p>As for your side question why ${\\mathbb R}$ is torsion-free, consider the sum of strictly positive numbers $a_1+ ... + a_n$. Prove (by induction) that this sum is still strictly positive using Hilbert's axioms of real numbers. Now, specialize to the case $a_1=...=a_n=a&gt;0$. </p>\n" } ]
Question: Subgroup of $\mathbb{R}$ that is a nontrivial direct sum of $n$ groups Answer 1: <p>Any finitely generated subgroup of $\mathbb{R}$ is torsion free, thus free. Thus, you are to find $n$ elements of $\mathbb{R}$ which are linearly independent over $\mathbb{Z}$. Hint: a bunch of irrationals will do the trick!</p> Answer 2: <p>Consider a subgroup $\langle x_1, \ldots, x_t \rangle$ such that $x_1, \ldots, x_t \in \mathbb{R}$ are linearly independent over $\mathbb{Z}$. In this case $\langle x_1, \ldots, x_t \rangle \cong \langle x_1 \rangle \oplus \cdots \oplus \langle x_t \rangle$.</p> <p>For example, the subgroup $H = \langle 1, \sqrt{2} \rangle$ works for $t = 2$. Here $H = \{a + b \sqrt{2} : a, b \in \mathbb{Z}\} \cong \mathbb{Z} \oplus \mathbb{Z}$.</p> <p>Some families of $\mathbb{Z}$-independent real numbers are $\{ \log(p) : p \text{ prime} \}$ and $\{ \sqrt{p} : p \text{ prime} \}$.</p> Answer 3: <p>There are two ways of solving this problem which came to my mind at first glance.</p> <p>1) <strong>Probably a bit "cheating"</strong>, as I assume you don't know the classification of finitely generated abelian groups. However it says (in its simplest form) that a <em>finitely generated</em> abelian group $A$ is of the form $A \cong \mathbb{Z}^k \times T$, where $T$ is the subgroup of all torsion elements. As you are looking for a certain subgroup of $\mathbb{R}$ (I assume your operation is <em>addition</em>), it is necessary to note that $(\mathbb{R}, +)$ does <em>not</em> contain any non-trivial elements of finite order. Indeed, can you think of a <em>non-zero</em> $x \in \mathbb{R}$, s.t. $x + \dots + x = 0$?</p> <p>So if you take some elements $x_1, \dots, x_n \in \mathbb{R}$, you can be certain that they generate a subgroup isomorphic to $\mathbb{Z}^k = \mathbb{Z} \times \dots \times \mathbb{Z}$ for some $k$, i.e. $\langle x_1, \dots, x_n \rangle \cong \mathbb{Z}^k$. However you may not always be lucky and have $k &lt; n$. 
And in your question you are asking for the "optimal" case where $k = n$. But this is exactly the case when your chosen $x_1, \dots, x_n$ happen to be <em>linearly independent over</em> $\mathbb{Z}$. Why? Suppose they are not. Then you can find <em>integers</em> $a_1, \dots, a_n \in \mathbb{Z}$ not all equal to zero, s.t. $a_1 x_1 + \dots + a_n x_n = 0$. Without loss let us assume that $a_n \neq 0$, then you can rearrange this to $x_n = -\frac{1}{a_n} (a_1 x_1 + \dots + a_{n-1} x_{n-1})$. In other words your $x_n$ can be expressed in terms of the other $x_1, \dots , x_{n-1}$, i.e. $x_n$ is an "unnecessary generator" for the group $\langle x_1, \dots, x_n \rangle$. Hence it would suffice to look at $\langle x_1, \dots, x_{n-1} \rangle$. I am sure you now get the point.</p> <p>2) <strong>The more or less direct approach</strong> would be to look at one of the characterisations of direct products. I assume you know this, since this is probably something done in every first course in group theory (<em>internal direct product</em>). Namely, let $G$ be a group and $A,B \leq G$ be subgroups. Then $G \cong A \times B$ if and only if the following $3$ properties hold:</p> <p>a) $A \cap B = \{ 0 \}$</p> <p>b) Every element of $G$ can be expressed as a product of an element of $A$ and an element of $B$, i.e. $G = A + B := \{a+b | a \in A, b \in B\}$ (I took the symbol "+" since we are working with addition in this very example)</p> <p>c) Both $A$ and $B$ are normal subgroups of $G$.</p> <p>So take two non-trivial elements $x,y \in \mathbb{R}$. Since they are non-torsion (see above paragraph) we have $A:=\langle y \rangle \cong \mathbb{Z}$ and $B:= \langle x \rangle \cong \mathbb{Z}$. As $\mathbb{R}$ is abelian all its subgroups are <em>normal</em>, in particular item c) holds. As we will form the group $A \cdot B = : G \subseteq \mathbb{R}$, we only have to handle item a). 
But can you think of two explicit elements $x$ and $y$ such that the intersection of the corresponding subgroups $A$ and $B$ is exactly the zero element? (I am quite sure that you can). However if you have such two elements you have accomplished that $\mathbb{R} \supseteq G \cong A \times B \cong \mathbb{Z}^2$. For example take (the already mentioned) numbers of the form $\sqrt{j^2 + 1}$. If you take $x=\sqrt{1^2 + 1} = \sqrt{2}$ and $y =\sqrt{2^2 + 1} = \sqrt{5}$ you have an example. But now you can proceed inductively. Add another element, $\sqrt{10}$, and consider the subgroup generated by this element. You want it to have trivial intersection with your already formed group $G$. In any case this breaks down to the fact that these numbers are linearly independent over $\mathbb{Z}$, which you can easily show by simple algebra. Indeed suppose they have non-trivial intersection, i.e. there exist <em>integers</em> $a,b,c \in \mathbb{Z}$ (where $c \neq 0$), s.t. $a \sqrt{2} + b \sqrt{5} = c \sqrt{10}$ (you certainly see why this is "the same" as linear independence over $\mathbb{Z}$). Square the whole thing and you will see the solution. You will get a contradiction and obtain that $a = b = c = 0$. I hope my answer helps you to understand why the other answers to this question were already enough help :)</p> Answer 4: <p>If you want a specific example, consider for instance the subgroup generated by the numbers $$ e^1, e^2, e^3,..., e^n $$ In view of <a href="https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem" rel="nofollow noreferrer">Lindemann–Weierstrass Theorem (Baker's Reformulation)</a>, these numbers are linearly independent over $\bar{{\mathbb Q}}$ and, hence, over ${\mathbb Q}$. Therefore, regarding ${\mathbb R}$ as a vector space over ${\mathbb Q}$, the subspace spanned by these exponentials is isomorphic to ${\mathbb Q}^n$. Hence, the subgroup they generate is isomorphic to ${\mathbb Z}^n$. 
The latter, I trust you know, is an $n$-fold direct product of nontrivial groups. </p> <p>As for your side question why ${\mathbb R}$ is torsion-free, consider the sum of strictly positive numbers $a_1+ ... + a_n$. Prove (by induction) that this sum is still strictly positive using Hilbert's axioms of real numbers. Now, specialize to the case $a_1=...=a_n=a&gt;0$. </p>
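A brute-force numeric illustration of the $\mathbb{Z}$-linear independence used in these answers (an editorial sketch, not a proof and not part of the thread): within a small search window, the only integer pair with $a + b\sqrt{2} = 0$ is $(0, 0)$.

```python
import math

# Search |a|, |b| <= 50 for integer relations a + b*sqrt(2) = 0.
# Only the trivial pair should survive, matching the independence
# of 1 and sqrt(2) over Z.
root2 = math.sqrt(2)
solutions = [(a, b)
             for a in range(-50, 51)
             for b in range(-50, 51)
             if abs(a + b * root2) < 1e-9]
assert solutions == [(0, 0)]
print("ok")
```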
What is the term for vector in same direction but with different length
2,099,706
4
https://math.stackexchange.com/questions/2099706/what-is-the-term-for-vector-in-same-direction-but-with-different-length
<p>I'll say $\vec{a}$ and $\vec{b}$ are collinear if they lie on the same line, like (1, 2, 3) and (-1, -2, -3). What should I call two vectors in the same direction but with different lengths, like (1, 2, 3) and (2, 4, 6)? (Of course they are collinear, but is there any more specific term for them?)</p>
[ { "answer_id": 2099711, "score": 2, "text": "<p>You can say (1,2,3) and (2,4,6) are parallel.</p>\n\n<p>This is because $\\vec{a} = k \\vec{b}$ for some scalar $k &gt; 0$.</p>\n\n<blockquote>\n  <p>Edit - Difference between parallel and collinear</p>\n</blockquote>\n\n<p>Two parallel vectors point in the same direction, with angle $0$ between them. Collinear vectors lie on the same line, so they may be parallel or anti-parallel, with angle $180^\\circ$ in the latter case. </p>\n" }, { "answer_id": 2099761, "score": 1, "text": "<p>The proper term is that one vector is a positive scalar multiple of the other. Or you could say that they give the same normalised vector (which is the vector obtained by dividing them by their own length). In a more informal discussion you could say the vectors point in the same direction.</p>\n" } ]
Question: What is the term for vector in same direction but with different length Answer 1: <p>You can say (1,2,3) and (2,4,6) are parallel.</p> <p>This is because $\vec{a} = k \vec{b}$ for some scalar $k &gt; 0$.</p> <blockquote> <p>Edit - Difference between parallel and collinear</p> </blockquote> <p>Two parallel vectors point in the same direction, with angle $0$ between them. Collinear vectors lie on the same line, so they may be parallel or anti-parallel, with angle $180^\circ$ in the latter case. </p> Answer 2: <p>The proper term is that one vector is a positive scalar multiple of the other. Or you could say that they give the same normalised vector (which is the vector obtained by dividing them by their own length). In a more informal discussion you could say the vectors point in the same direction.</p>
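The normalisation criterion mentioned in the answers can be sketched in code (an editorial addition; the helper `same_direction` is mine): two nonzero vectors satisfy $\vec{b} = k\vec{a}$ with $k > 0$ exactly when their unit vectors coincide.

```python
import math

# Two nonzero vectors point the same way iff a/|a| and b/|b| agree.
def same_direction(a, b, tol=1e-9):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return all(abs(x / na - y / nb) < tol for x, y in zip(a, b))

assert same_direction((1, 2, 3), (2, 4, 6))         # positive scalar multiple
assert not same_direction((1, 2, 3), (-1, -2, -3))  # collinear but opposite
print("ok")
```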
Definition of significant figures
2,099,735
4
https://math.stackexchange.com/questions/2099735/definition-of-significant-figures
<p>Normally when we are taught how to add numbers with regards to significant figures, we are told to round the result to the rightmost place of the least precise digit. $3.55 + 4 = 7.55$, for example, would be rounded off to $8$. But for my argument I will be considering the addition of two numbers $9$ and $2$, both of which are significant to the ones digit.</p> <p>Following the usual rule for addition, $9 + 2 = 11$ and now we have two significant figures. However, whenever I think about significant figures I would intuitively think of these two definitions:</p> <blockquote> <p>1 - Digits to the right of the last significant digit, can be any number between 0 and 9.</p> <p>2 - Any digit that has the possibility of being more than one numerical value, is not a significant digit.</p> </blockquote> <p>Using the first definition, we can put $9$ and $2$ in these forms:</p> <p>$9.A$</p> <p>$2.B$</p> <p>where $A$ and $B$ are arbitrary digits ranging from $0$ to $9$. Thus, when we add these two numbers, we would end up with:</p> <p>$$9.A + 2.B$$</p> <p>$$= 9 + 2 + 0.A + 0.B$$</p> <p>$$= 11 + 0.A + 0.B$$</p> <p>Finally when we assign some meaningful values to $A$ and $B$ such as $(A , B) = (0 , 0)$ or $(A , B) = (9 , 9)$, we can observe something very interesting:</p> <ul> <li>For $(A , B) = (0 , 0)$:</li> </ul> <p>$$11 + 0.0 + 0.0 = 11$$</p> <ul> <li>For $(A , B) = (9 , 9)$:</li> </ul> <p>$$11 + 0.9 + 0.9 = 12.8$$</p> <p>In the set of all possible numbers that can result from adding $9.A$ and $2.B$, both of which are significant to the ones digit, the minimum happens to be $11$ and the maximum happens to be $12.8$ (the set becomes even greater if we add more uncertain digits to the right of $A$ and $B$, and in that case the maximum resulting value starts to approach $13$ as you add more and more 9's to both $9$ and $2$). 
</p> <p>Lastly, we can see that tens digit is always $1$, and using my second definition of significant digits, we can claim that the tens digit is significant. On the other hand, because ones digit can either be $1$ or $2$, the ones digit is not significant. Thus, using my two definitions of significant figures, we can claim that the addition between $9$ and $2$ results in one significant figure (at the tens place), rather than two significant figures one would get from the usual rule of thumb.</p> <p><strong>So my question is, are my definitions of significant digits correct?</strong> If yes, then I can say the general rules of thumb for adding numbers with significant figures is wrong and is being blindly taught and learned in high schools and universities. If not, then can you show me why not and if you have any credible sources.</p>
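The interval argument in the question can be checked by enumeration (an editorial sketch, not part of the original post): with hidden digits $A, B$ ranging over $0$ to $9$, the sums $9.A + 2.B$ fill $[11.0, 12.8]$, so only the tens digit is fixed while the ones digit varies.

```python
# Enumerate all sums 9.A + 2.B with digits A, B in 0..9.
sums = [9 + a / 10 + 2 + b / 10 for a in range(10) for b in range(10)]
assert min(sums) == 11.0
assert abs(max(sums) - 12.8) < 1e-9
tens = {int(s) // 10 for s in sums}
ones = {int(s) % 10 for s in sums}
assert tens == {1}      # tens digit is always 1
assert ones == {1, 2}   # ones digit can be 1 or 2
print("ok")
```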
[ { "answer_id": 2099768, "score": 1, "text": "<p>The standard significant digits rules assume that the values you are working with are <em>rounded</em> versions of the true values. Thus you cannot take (A, B) = (9,9) in your example, because 9.9 would have been rounded to 10, not 9.</p>\n\n<p>Note also that 8.9 is rounded to 9, so you cannot assume that the true value has the form \"9.A\".</p>\n" }, { "answer_id": 2099794, "score": 0, "text": "<p>There is some inconsistency in usage. Some people count significant digits from the decimal point, which works well enough when the possible range of values is reasonably small. For the rule you cite (and others related to error analysis and propagation) to be applied consistently, one must instead start counting significant digits from the first non-zero digit from the left. This is equivalent to writing all numbers in scientific notation and simply counting digits starting on the left. </p>\n\n<p>In your example, then, you’re actually adding $9\times10^0$ and $2\times10^0$. Both exponents are the same, so no adjustment is necessary before performing the addition. Their sum is $1.1\times10^1$, but this has too many significant digits, so it gets rounded to $1\times 10^1$, or $10$. </p>\n\n<p>There is an ambiguity that you have to deal with, though: how many significant digits does an integer that ends with a string of zeros have? A number like $1000$ can cause the precision of a calculation to collapse very quickly if not handled correctly. This is another advantage of scientific notation in this context: there’s no such ambiguity. Trailing zeros, if any, are significant digits.</p>\n" }, { "answer_id": 2099933, "score": 0, "text": "<p>There are two issues here: the number of significant digits, and how much rounding is appropriate. Both are slippery concepts with apparently different definitions and advice in different fields. Here I will focus mainly on rounding. 
The crucial point is not to throw away information that may be useful. Here are some examples to think about.</p>\n\n<p>$1.$ <em>Distinctions between individuals and averages.</em> If one person loses a pound during a month, that is not usually a significant event. In the process of eating, drinking, exercising, or elimination a person can temporarily gain or lose a pound during a day. However, if there is an <em>average</em> gain or loss of one pound for 100 subjects during a month-long weight loss program, that may be a significant if not startling change. So for most purposes, it is probably appropriate to round\nweights of individuals to the nearest pound and to round weights of groups to the nearest tenth of a pound.</p>\n\n<p>$2.$ <em>Premature rounding.</em> In a famous study of boarding school boys in India, it was established that students were about one $cm$ shorter in the evening than in the morning. Each student was measured twice by each of two investigators in the morning, and again in the evening. Even though it is not really feasible to\nmeasure height to the nearest millimeter, best attempts were made to do so for all eight measurements on each child. If all the original measurements had been rounded to the nearest centimeter, the \"1$cm$\" difference would not have been as clear. Also, it turned out that one investigator systematically measured students as a little taller than the other, and that finding might have been totally obscured by rounding to the nearest centimeter. </p>\n\n<p>$3.$ <em>Rounding before statistical tests.</em> Certain kinds of rank-based statistical tests do not work well if there are many ties in theoretically-continuous data. So, in performing such tests, one should be careful about rounding measurements before testing. </p>\n\n<p>Example: A brief simulation shows the effect of rounding. Suppose a certain\nmeasurement in a population can be modeled for practical purposes as normal with mean 400 and standard deviation 60. 
As a practical matter, such measurements will usually span an interval such as $(260,540).$ If we simulate 100 such measurements\nand record them to the nearest 0.1, then typically there will be few if any\nties; specifically, one tie in the instance shown below:</p>\n\n<pre><code>x = rnorm(100, 400, 60)\nlength(unique(round(x,1)))\n## 99\n</code></pre>\n\n<p>But if we round to the nearest <em>integer,</em> it is not unusual to get as many\nas 10 or 15 ties (enough to make rank-based analysis problematic).</p>\n\n<pre><code>length(unique(round(x)))\n## 87\n</code></pre>\n\n<p>$4.$ <em>Authoritative standards.</em> My opinion is that the rounding rules in mathematics\ntexts are generally not appropriate in statistical practice. Governments and industries\noften adopt standards of practice for rounding that simplify without sacrificing useful information. <a href=\"https://www.nist.gov/sites/default/files/documents/pml/wmd/labmetrology/GLP_9_20140911.pdf\" rel=\"nofollow noreferrer\">This report</a> by the US National Institute of Standards and Technology is one\nexample. I am not claiming that these standards are always ideal, but they are\nbased on thoughtful consideration by people who handle real-world data.</p>\n" } ]
Question: Definition of significant figures Answer 1: <p>The standard significant digits rules assume that the values you are working with are <em>rounded</em> versions of the true values. Thus you cannot take (A, B) = (9,9) in your example, because 9.9 would have been rounded to 10, not 9.</p> <p>Note also that 8.9 is rounded to 9, so you cannot assume that the true value has the form "9.A".</p> Answer 2: <p>There is some inconsistency in usage. Some people count significant digits from the decimal point, which works well enough when the possible range of values is reasonably small. For the rule you cite (and others related to error analysis and propagation) to be applied consistently, one must instead start counting significant digits from the first non-zero digit from the left. This is equivalent to writing all numbers in scientific notation and simply counting digits starting on the left. </p> <p>In your example, then, you’re actually adding $9\times10^0$ and $2\times10^0$. Both exponents are the same, so no adjustment is necessary before performing the addition. Their sum is $1.1\times10^1$, but this has too many significant digits, so it gets rounded to $1\times 10^1$, or $10$. </p> <p>There is an ambiguity that you have to deal with, though: how many significant digits does an integer that ends with a string of zeros have? A number like $1000$ can cause the precision of a calculation to collapse very quickly if not handled correctly. This is another advantage of scientific notation in this context: there’s no such ambiguity. Trailing zeros, if any, are significant digits.</p> Answer 3: <p>There are two issues here: the number of significant digits, and how much rounding is appropriate. Both are slippery concepts with apparently different definitions and advice in different fields. Here I will focus mainly on rounding. The crucial point is not to throw away information that may be useful. 
Here are some examples to think about.</p> <p>$1.$ <em>Distinctions between individuals and averages.</em> If one person loses a pound during a month, that is not usually a significant event. In the process of eating, drinking, exercising, or elimination a person can temporarily gain or lose a pound during a day. However, if there is an <em>average</em> gain or loss of one pound for 100 subjects during a month-long weight loss program, that may be a significant if not startling change. So for most purposes, it is probably appropriate to round weights of individuals to the nearest pound and to round weights of groups to the nearest tenth of a pound.</p> <p>$2.$ <em>Premature rounding.</em> In a famous study of boarding school boys in India, it was established that students were about one $cm$ shorter in the evening than in the morning. Each student was measured twice by each of two investigators in the morning, and again in the evening. Even though it is not really feasible to measure height to the nearest millimeter, best attempts were made to do so for all eight measurements on each child. If all the original measurements had been rounded to the nearest centimeter, the "1$cm$" difference would not have been as clear. Also, it turned out that one investigator systematically measured students as a little taller than the other, and that finding might have been totally obscured by rounding to the nearest centimeter. </p> <p>$3.$ <em>Rounding before statistical tests.</em> Certain kinds of rank-based statistical tests do not work well if there are many ties in theoretically-continuous data. So, in performing such tests, one should be careful about rounding measurements before testing. </p> <p>Example: A brief simulation shows the effect of rounding. Suppose a certain measurement in a population can be modeled for practical purposes as normal with mean 400 and standard deviation 60. 
As a practical matter, such measurements will usually span an interval such as $(260,540).$ If we simulate 100 such measurements and record them to the nearest 0.1, then typically there will be few if any ties; specifically, one tie in the instance shown below:</p> <pre><code>x = rnorm(100, 400, 60)
length(unique(round(x,1)))
## 99
</code></pre> <p>But if we round to the nearest <em>integer,</em> it is not unusual to get as many as 10 or 15 ties (enough to make rank-based analysis problematic).</p> <pre><code>length(unique(round(x)))
## 87
</code></pre> <p>$4.$ <em>Authoritative standards.</em> My opinion is that the rounding rules in mathematics texts are generally not appropriate in statistical practice. Governments and industries often adopt standards of practice for rounding that simplify without sacrificing useful information. <a href="https://www.nist.gov/sites/default/files/documents/pml/wmd/labmetrology/GLP_9_20140911.pdf" rel="nofollow noreferrer">This report</a> by the US National Institute of Standards and Technology is one example. I am not claiming that these standards are always ideal, but they are based on thoughtful consideration by people who handle real-world data.</p>
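The interval argument in the question above can be checked exhaustively. The sketch below (plain Python; the variable names are my own, not from the question) enumerates every possible sum $9.A + 2.B$, working in tenths so the arithmetic is exact:

```python
# Enumerate all sums 9.A + 2.B with A, B in 0..9. Work in tenths so the
# arithmetic is exact integers: 9.A is 90 + A tenths, 2.B is 20 + B tenths.
sums_in_tenths = [(90 + a) + (20 + b) for a in range(10) for b in range(10)]

lo, hi = min(sums_in_tenths), max(sums_in_tenths)       # 110 and 128 tenths
tens_digits = {s // 100 for s in sums_in_tenths}        # tens place of each sum
ones_digits = {(s // 10) % 10 for s in sums_in_tenths}  # ones place of each sum

print(lo / 10, hi / 10)  # 11.0 12.8 -- the question's minimum and maximum
print(tens_digits)       # {1}: the tens digit is always 1
print(ones_digits)       # {1, 2}: the ones digit varies, so it is uncertain
```

This reproduces the question's range $[11, 12.8]$; note Answer 1's caveat still applies, since the rules presume the inputs are already rounded values.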
Every linearly independent set can be extended to a basis
2,099,858
4
https://math.stackexchange.com/questions/2099858/every-linearly-independent-set-can-be-extended-to-a-basis
<p>Let $E$ be a linear space (infinite dimensional in general). We know by Zorn's lemma that there exists a basis. Now let $S \subset E$ be any linearly independent subset. How can one prove that it is contained in some basis of $E$? </p> <p>And moreover, if $F \subset E$ is a subspace, then there is a linear functional $f$ such that $f(F) = 0$, and a linear complement to $F$ in $E$.</p> <p>I know how it can be done for finite-dimensional spaces but I am always confused when infinite dimensions and Zorn's lemma are involved.</p>
[ { "answer_id": 2099881, "score": 3, "text": "<p>Use again Zorn's Lemma. Define:</p>\n\n<p>$$C:=\\left\\{\\,T\\supset S\\,/\\,\\,T\\;\\text{is linearly independent}\\,\\right\\}$$</p>\n\n<p>Observe that $\\;S\\in C\\implies C\\neq\\emptyset\\;$ and we can partial order $\\;C\\;$ by set inclusion and etc. (check this!), so by Zorn's Lemma we're done if we succeed in proving a maximal element $\\;M\\;$ in $\\;C\\;$ is a basis of $\\;E\\;$ , but this is easy as otherwise there'd be an element $\\;x\\in E\\,,\\,\\,x\\notin\\text{ Span}\\,M\\;$, but then $\\;M\\cup\\{x\\}\\;$ is lin. ind. and contains $\\;S\\;$ so contradiction...(fill in details here)</p>\n" }, { "answer_id": 2099920, "score": 0, "text": "<p>Let $U$ be the span of $S$. Then the vector space $E/U$ has a basis, say $T=\\{y_i+U:i\\in I\\}$.</p>\n\n<p>Consider $S'=S\\cup\\{y_i:i\\in I\\}$ and prove it is a basis for $E$.</p>\n\n<p>The span of $\\{y_i:i\\in I\\}$ is a complement for $U$. A linear functional that is $0$ on $U$ can be found by just choosing an element $y_i$ and decreeing that this one is mapped to $1$ and all other elements of $S'$ are mapped to $0$ (provided $S$ is not a basis of $E$ to begin with, in this case the zero functional is the only possible one).</p>\n" } ]
Question: Every linearly independent set can be extended to a basis Answer 1: <p>Use again Zorn's Lemma. Define:</p> <p>$$C:=\left\{\,T\supset S\,/\,\,T\;\text{is linearly independent}\,\right\}$$</p> <p>Observe that $\;S\in C\implies C\neq\emptyset\;$ and we can partial order $\;C\;$ by set inclusion and etc. (check this!), so by Zorn's Lemma we're done if we succeed in proving a maximal element $\;M\;$ in $\;C\;$ is a basis of $\;E\;$ , but this is easy as otherwise there'd be an element $\;x\in E\,,\,\,x\notin\text{ Span}\,M\;$, but then $\;M\cup\{x\}\;$ is lin. ind. and contains $\;S\;$ so contradiction...(fill in details here)</p> Answer 2: <p>Let $U$ be the span of $S$. Then the vector space $E/U$ has a basis, say $T=\{y_i+U:i\in I\}$.</p> <p>Consider $S'=S\cup\{y_i:i\in I\}$ and prove it is a basis for $E$.</p> <p>The span of $\{y_i:i\in I\}$ is a complement for $U$. A linear functional that is $0$ on $U$ can be found by just choosing an element $y_i$ and decreeing that this one is mapped to $1$ and all other elements of $S'$ are mapped to $0$ (provided $S$ is not a basis of $E$ to begin with, in this case the zero functional is the only possible one).</p>
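The maximality argument in Answer 1 is non-constructive in general, but in finite dimensions the same idea becomes an effective procedure: keep adjoining vectors that lie outside the current span until the span is all of the space. A sketch of that idea for $\mathbb{R}^n$ (NumPy; the function name is my own, and this illustrates only the finite-dimensional case, not the Zorn's-lemma theorem):

```python
import numpy as np

def extend_to_basis(S, n):
    """Greedily extend the linearly independent vectors in S to a basis of R^n.

    Mirrors the maximality argument: adjoin a candidate vector whenever it is
    not in the span of what we have, i.e. whenever the rank strictly increases.
    """
    basis = [np.asarray(v, dtype=float) for v in S]
    for i in range(n):                 # candidates: standard basis e_1, ..., e_n
        e = np.zeros(n)
        e[i] = 1.0
        trial = np.column_stack(basis + [e]) if basis else e.reshape(n, 1)
        if np.linalg.matrix_rank(trial) == len(basis) + 1:
            basis.append(e)            # e lies outside span(basis): adjoin it
    return basis

S = [np.array([1.0, 1.0, 0.0])]        # a linearly independent set in R^3
B = extend_to_basis(S, 3)              # B contains S and spans R^3
```

Running through the standard basis suffices here because it spans $\mathbb{R}^n$, so the greedy loop terminates with a maximal independent set, which is exactly a basis containing $S$.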
The most outrageous (or ridiculous) conjectures in mathematics
259,844
208
https://mathoverflow.net/questions/259844/the-most-outrageous-or-ridiculous-conjectures-in-mathematics
<h2>The purpose of this question is to collect the most outrageous (or ridiculous) conjectures in mathematics.</h2> <p>An outrageous conjecture is qualified <strong>ONLY</strong> if:</p> <h3>1) It is most likely false</h3> <p>(Being hopeless is NOT enough.)</p> <h3>2) It is not known to be false</h3> <h3>3) It was published or made publicly before 2006.</h3> <h3>4) It is Important:</h3> <p>(It is based on some appealing heuristic or idea; refuting it will be important etc.)</p> <h3>5) IT IS NOT just the negation of a famous commonly believed conjecture.</h3> <p>As always with big list problems please make one conjecture per answer. (I am not sure this is really a big list question, since I am not aware of many such outrageous conjectures. I am aware of one wonderful example that I hope to post as an answer in a couple of weeks.)</p> <p><em>Very important examples where the conjecture was believed as false when it was made but this is no longer the consensus may also qualify!</em></p> <p>Shmuel Weinberger <a href="https://gilkalai.wordpress.com/2008/11/19/about-conjectures-shmuel-weinberger/" rel="noreferrer">described various types of mathematical conjectures</a>. 
And the type of conjectures the question proposes to collect is of the kind:</p> <blockquote> <h3>On other times, I have conjectured to lay down the gauntlet: “See,</h3> <h3>you can’t even disprove this ridiculous idea.&quot;</h3> </blockquote> <p><strong>Summary of answers</strong> (updated: <del> March, 13, 2017 </del> <del> February 27, 2020 </del> September 2, 2023):</p> <ol> <li><p><a href="https://mathoverflow.net/a/259846/1532">Berkeley Cardinals exist</a></p> </li> <li><p><a href="https://mathoverflow.net/a/259862/1532">There are at least as many primes between <span class="math-container">$2$</span> to <span class="math-container">$n+1$</span> as there are between <span class="math-container">$k$</span> to <span class="math-container">$n+k-1$</span></a></p> </li> <li><p><a href="https://mathoverflow.net/a/259845/1532">P=NP</a></p> </li> <li><p><a href="https://mathoverflow.net/a/259851/1532">A super exact (too good to be true) estimate for the number of twin primes below <span class="math-container">$n$</span>.</a></p> </li> <li><p><a href="https://mathoverflow.net/a/259880/1532">Peano Arithmetic is inconsistent</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259878/1532">The set of prime differences has intermediate Turing degree</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259943/1532">Vopěnka's principle.</a></p> </li> <li><p><a href="https://mathoverflow.net/a/259919/1532">Siegel zeros exist</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259977/1532">All rationally connected varieties are unirational</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259855/1532">Hall's original conjecture</a> (number theory).</p> </li> <li><p><a href="https://mathoverflow.net/a/259915/1532">Siegel's disk exists</a>.</p> </li> <li><p>The telescope conjecture in homotopy theory. 
(<a href="https://www.quantamagazine.org/an-old-conjecture-falls-making-spheres-a-lot-more-complicated-20230822/" rel="noreferrer">Disproof by Robert Burklund, Jeremy Hahn, Ishan Levy, and Tomer Schlank announced 2023</a>), <a href="https://arxiv.org/abs/2310.17459" rel="noreferrer">preprint</a></p> </li> <li><p><a href="https://mathoverflow.net/a/260028/1532">Tarski's monster do not exist</a> (settled by Olshanski)</p> </li> <li><p><a href="https://mathoverflow.net/a/260189/1532">All zeros of the Riemann zeta functions have rational imaginary part</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/260132/1532">The Lusternik-Schnirelmann category of <span class="math-container">$Sp(n)$</span> equals <span class="math-container">$2n-1$</span></a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259917/1532">The finitistic dimension conjecture for finite dimensional algebras</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/259955/1532">The implicit graph conjecture</a> (graph theory, theory of computing)</p> </li> <li><p><a href="https://mathoverflow.net/a/260197/1532"><span class="math-container">$e+\pi$</span> is rational</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/260226/1532">Zeeman's collapsing conjecture</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/350559/1532">All groups are sofic</a>.</p> </li> <li><p><a href="https://mathoverflow.net/a/387439/">The Lovász conjecture</a>.</p> </li> </ol> <p>(From comments, incomplete list) 22. <a href="https://en.wikipedia.org/wiki/Jacobian_conjecture" rel="noreferrer">The Jacobian conjecture</a>; 23. The <a href="https://en.wikipedia.org/wiki/Berman%E2%80%93Hartmanis_conjecture" rel="noreferrer">Berman–Hartmanis conjecture</a> 24. The <a href="https://en.wikipedia.org/wiki/Casas-Alvero_conjecture" rel="noreferrer">Casas-Alvero conjecture</a> 25. 
An <a href="https://mathoverflow.net/questions/101821/can-there-be-an-embedding-jv-to-l-from-the-set-theoretic-universe-v-to-the-con">implausible embedding into <span class="math-container">$L$</span></a> (set theory). 26. There is <a href="https://arxiv.org/abs/math/0603218" rel="noreferrer">a gap of at most <span class="math-container">$\log n$</span> between threshold and expectation threshold</a> (Update: a slightly weaker version of this conjecture <a href="https://gilkalai.wordpress.com/2019/10/30/amazing-keith-frankston-jeff-kahn-bhargav-narayanan-jinyoung-park-thresholds-versus-fractional-expectation-thresholds/" rel="noreferrer">was proved</a> by Keith Frankston, Jeff Kahn, Bhargav Narayanan, and Jinyoung Park!; Further update: the conjecture <a href="https://gilkalai.wordpress.com/2022/04/02/amazing-jinyoung-park-and-huy-tuan-pham-settled-the-expectation-threshold-conjecture/" rel="noreferrer">was fully proved by Jinyoung Park and Huy Tuan Pham</a> ). 27. NEXP-complete problems are solvable by logarithmic depth, polynomial-size circuits consisting entirely of mod 6 gates. 28. Fermat had a marvelous proof for Fermat's last theorem. (History of mathematics).</p>
[ { "answer_id": 259862, "score": 132, "text": "<p>A long-standing conjecture in Number Theory is that for each positive integer <span class=\"math-container\">$n$</span> there is no stretch of <span class=\"math-container\">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class=\"math-container\">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. </p>\n\n<p>But Hensley and Richards (<a href=\"https://eudml.org/doc/205282\" rel=\"noreferrer\">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href=\"https://mathscinet.ams.org/mathscinet-getitem?mr=0396440\" rel=\"noreferrer\">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class=\"math-container\">$k$</span>-tuples conjecture. </p>\n\n<p>The current consensus, I believe, is that prime <span class=\"math-container\">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p>\n" }, { "answer_id": 259846, "score": 125, "text": "<p>W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href=\"http://cantorsattic.info/Berkeley\">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p>\n\n<p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a \"Refute this!\" manner that seems to be in exactly the spirit of your question. </p>\n\n<p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. 
</p>\n" }, { "answer_id": 259845, "score": 89, "text": "<h1><a href=\"https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question\"><span class=\"math-container\">$P=NP$</span></a></h1>\n<p>Let me tick the list:</p>\n<ol>\n<li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class=\"math-container\">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p>\n</li>\n<li><p>Yes, it's The Open Problem in computational complexity theory</p>\n</li>\n<li><p>Yes, it's old</p>\n</li>\n<li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p>\n</li>\n<li><p>It's an equality rather than a negation</p>\n</li>\n</ol>\n" }, { "answer_id": 259851, "score": 68, "text": "<p>I've heard that Roger Heath-Brown has presented the following \"conjecture\" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p>\n\n<p>Let $\\pi_2(x)$ denote the number of twin primes less than $x$. Then</p>\n\n<blockquote>\n <blockquote>\n <p>$\\pi_2(x) = c \\int_{0}^{x}\\frac{dt}{\\log^2 t} + O(1)$</p>\n </blockquote>\n</blockquote>\n\n<p>where $c$ is the twin prime constant. </p>\n\n<p>In other words, the twin prime asymptotic holds with error term is $O(1)$.</p>\n" }, { "answer_id": 259880, "score": 59, "text": "<p>I propose Edward Nelson's \"conjecture\" that Peano's arithmetic is inconsistent. 
</p>\n\n<p>First, to be honest, I am not aware that he stated it as \"conjecture\", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while but a mistake was discovered by Terry Tao and others independently) though he tried a lot. So a conjecture it is, in the sense this word currently has. </p>\n\n<p>It is also certainly \"outrageous\", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p>\n\n<p>But let's check that it is also \"outrageous\" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) it is obviously extremely important. 5) The negation, that is the assertion that \"Peano's arithmetic is consistent\" was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p>\n\n<p>Let me add that it also satisfies something Gil Kalai added in comment, namely \"I prefer examples where [...] 
the proposer proposes the conjecture genuinely suggesting that it is true\".</p>\n" }, { "answer_id": 259878, "score": 48, "text": "<p>From this <a href=\"https://mathoverflow.net/a/218926/12978\">Math Overflow question</a>, Joel David Hamkins wrote:</p>\n\n<blockquote>\n <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p>\n</blockquote>\n\n<p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p>\n\n<p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p>\n" }, { "answer_id": 259943, "score": 42, "text": "<p><strong><a href=\"https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle\" rel=\"nofollow noreferrer\">Vopěnka's Principle</a></strong></p>\n<p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href=\"https://doi.org/10.1017/CBO9780511600579\" rel=\"nofollow noreferrer\">Locally presentable and accessible categories.</a>”:</p>\n<blockquote>\n<p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka)\nis that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory.\nWhen he constructed, in collaboration with Z. Hedrlín and A. Pultr,\na rigid graph on every set (see Lemma 2.64), he came to the conclusion\nthat, with some more effort, a large rigid class of graphs must surely be\nalso constructible. 
He then decided to tease set-theorists: he introduced\na new principle (known today as Vopěnka's principle), and proved some\nconsequences concerning large cardinals. He hoped that some set-theorists\nwould continue this line of research (which they did) until somebody showed\nthat the principle is nonsense. However, the latter never materialized - after\na number of unsuccessful attempts at constructing a large rigid class\nof graphs, Vopěnka's principle received its name from Vopěnka's disciples.\nOne of them, T. J. Jech, made Vopěnka's principle widely known. Later\nthe consistency of this principle was derived from the existence of huge\ncardinals: see [Powell 1972]; our account (in the Appendix) is taken from\n[Jech 1978]. Thus, today this principle has a firm position in the theory of\nlarge cardinals. Petr Vopěnka himself never published anything related to\nthat principle.</p>\n</blockquote>\n" }, { "answer_id": 259855, "score": 37, "text": "<p>I don't know about \"ridiculous\", but there is <a href=\"https://en.wikipedia.org/wiki/Hall&#39;s_conjecture\" rel=\"noreferrer\">Hall's Conjecture</a> in its original form:</p>\n\n<blockquote>\n <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \\neq b^3$, one has $$|a^2-b^3|&gt; C \\sqrt{|b|}\\;.$$</p>\n</blockquote>\n" }, { "answer_id": 260197, "score": 37, "text": "<p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p>\n\n<blockquote>\n <p>$e+\\pi$ is rational.</p>\n</blockquote>\n\n<p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\\pi$, more or less all that is known is the (elementary) fact that either $e+\\pi$ or $e\\cdot\\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p>\n\n<p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p>\n\n<p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p>\n" }, { "answer_id": 259919, "score": 35, "text": "<p>Existence of <a href=\"https://en.wikipedia.org/wiki/Siegel_zero\" rel=\"noreferrer\">Siegel zeros</a>.</p>\n\n<p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p>\n\n<p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p>\n\n<p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href=\"https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29\" rel=\"noreferrer\">back in 1884</a> though.</p>\n\n<p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions \"conspire\" to have certain non-uniform distribution. 
I'm no expert, but <a href=\"https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros\">here</a> some uses are listed (see also <a href=\"https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/\" rel=\"noreferrer\">this blog post</a> by Terry Tao).</p>\n\n<p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying \"yeah, zeros might exist, but not too close to $1$\".</p>\n" }, { "answer_id": 259977, "score": 34, "text": "<p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p>\n<p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class=\"math-container\">$\\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions e.g., Artin–Mumford's obstruction via torsion in <span class=\"math-container\">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants which have led to major breakthroughs.</p>\n<p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p>\n" }, { "answer_id": 260189, "score": 29, "text": "<p>I was giving a talk several years ago about the conjectured linear independence (over $\\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a \"Refute this!\" statement:</p>\n\n<blockquote>\n <p>If $\\zeta(x+iy)=0$, then $y\\in\\Bbb Q$.</p>\n</blockquote>\n\n<p>(In other words, even though the imaginary parts of the nontrivial zeros of $\\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p>\n\n<p>This \"conjecture\" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p>\n" }, { "answer_id": 333105, "score": 24, "text": "<p>Here is my favorite.</p>\n\n<p>Spherical simplex is called rational, if all its dihedral angles are rational multiples of <span class=\"math-container\">$\\pi.$</span></p>\n\n<p><strong>Rational Volume Conjecture:</strong>\nVolume of every rational spherical simplex is a rational multiple of <span class=\"math-container\">$\\pi^2.$</span></p>\n\n<p>It is true in all known examples, but is expected to be false. 
The conjecture first appeared in the paper \"<a href=\"https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf\" rel=\"noreferrer\">Differential characters and geometric invariants</a>\" by Cheeger and Simons, p. 73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p>\n" }, { "answer_id": 260028, "score": 23, "text": "<p>For a prime $p$, an infinite group $G$ is a <a href=\"https://en.wikipedia.org/wiki/Tarski_monster_group\">Tarski monster</a> if each of its nontrivial proper subgroups has order $p$.</p>\n\n<p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups: such monsters obviously don't exist, it should be easy to prove that they don't exist, and yet we cannot prove it.</p>\n\n<p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that \"large\" means something like $p\geq 11$.</p>\n" }, { "answer_id": 284260, "score": 20, "text": "<p>I saw this really striking example yesterday here on MO: although the question <a href=\"https://mathoverflow.net/q/7304/41291\">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href=\"https://mathoverflow.net/a/37205/41291\">recent post</a> by <a href=\"https://mathoverflow.net/users/2349\">algori</a> suggests that this is still open.</p>\n\n<p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\\mathbb C\\mathbf P^n$ are algebraic</em>.</p>\n\n<p>The question is described as an open problem in Okonek-Schneider-Spindler (1980), but I believe it must have been asked much earlier.</p>\n" }, { "answer_id": 260132, "score": 18, "text": "<p><span class=\"math-container\">$\\DeclareMathOperator\\Sp{Sp}\\DeclareMathOperator\\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class=\"math-container\">$\\Sp(n)$</span>.
Since <span class=\"math-container\">$\\Sp(1) = S^3$</span>, <span class=\"math-container\">$\\cat(\\Sp(1)) = 1$</span>. In the 1960s, P. Schweitzer proved that <span class=\"math-container\">$\\cat(\\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class=\"math-container\">$\\cat(\\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class=\"math-container\">$\\cat(\\Sp(3)) = 5$</span>, so maybe it's true?</p>\n" }, { "answer_id": 259915, "score": 17, "text": "<p>If the holomorphic map $f:\\mathbb{C}\\to\\mathbb{C}$\nhas a fixed point $p$, and the derivative $\\lambda := f'(p)$ equals $e^{2\\pi i \\theta}$ (with irrational $\\theta$), one can ask if $f$ is conjugate to $z\\mapsto\\lambda\\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p>\n\n<ul>\n<li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li>\n<li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li>\n</ul>\n\n<p>Quadratic maps can have Siegel disks, but not for just any $\\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p>\n\n<p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. 
The <strong>outrageous conjecture</strong> that has been floating around is that:</p>\n\n<ul>\n<li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li>\n</ul>\n\n<p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p>\n" }, { "answer_id": 264543, "score": 17, "text": "<p><span class=\"math-container\">$S^6$</span> has a complex structure.</p>\n<p>I don't know if this applies, but it has a nice story. It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href=\"https://arxiv.org/abs/1610.09366\" rel=\"nofollow noreferrer\">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun, this would be a minor disaster.</p>\n" }, { "answer_id": 259955, "score": 15, "text": "<p>Let us say a graph class $\\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p>\n\n<p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs, which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003).
</p>\n\n<p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p>\n\n<p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\\text{C}$. It can be shown that $\\text{G1EXP} \\subsetneq \\text{G2EXP} \\subsetneq \\text{G3EXP} \\dots \\subsetneq \\text{GR} \\subsetneq \\text{GALL}$ where $\\text{kEXP}$ is the set of languages that can be computed in time $\\exp^k(\\text{poly}(n))$, $\\text{R}$ is the set of all decidable languages and $\\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\\text{GALL}$. Additionally, a graph class $\\mathcal{C}$ is in $\\text{GALL}$ iff $\\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \\in \\mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. 
</p>\n\n<p>An ALS is a tuple $S=(F,c)$ where $F \\subseteq \\{0,1\\}^* \\times \\{0,1\\}^*$ is called label decoder and $c \\in \\mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\\ell \\colon V(G) \\rightarrow \\{0,1\\}^{c \\lceil \\log n \\rceil}$ such that for all $u,v \\in V(G)$ it holds that $(u,v) \\in E(G) \\Leftrightarrow (\\ell(u),\\ell(v)) \\in F$. A graph class $\\mathcal{C}$ has an ALS $S$ if every graph in $\\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\\mathcal{C}$). </p>\n" }, { "answer_id": 350559, "score": 14, "text": "<p>Let me promote a comment of Terry Tao into an answer and mention two conjectures.\n(See, e.g., <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">Thom - Finitary approximations of groups and their applications</a> and <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p>\n<p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p>\n<p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear</p>\n<p>Conjecture 1 and 2 as well as several other problems can be asked in the context of approximate homomorphisms of a group <span class=\"math-container\">$\\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of normalized Hamming norm then this leads to Conjecture 1. If the target groups are <span class=\"math-container\">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert Schmidt norm then this leads to Conjecture 2. 
A counterexample to Conjecture 2 also leads to a counterexample to <a href=\"https://en.wikipedia.org/wiki/Connes_embedding_problem\" rel=\"nofollow noreferrer\">Connes' 1976 embedding conjecture</a>, which was refuted <a href=\"https://arxiv.org/abs/2001.04383\" rel=\"nofollow noreferrer\">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class=\"math-container\">$\\mathrm{MIP}^* = \\mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjectures 1 and 2 as outrageous conjectures!</p>\n" }, { "answer_id": 387439, "score": 11, "text": "<p>The so-called <a href=\"https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture\" rel=\"noreferrer\">Lovász conjecture</a> states:</p>\n<blockquote>\n<p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p>\n</blockquote>\n<p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it. László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity.
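(An added brute-force sketch, not part of the original answer.) The Petersen graph is the classic vertex-transitive example showing why the conjecture is stated for Hamiltonian paths rather than cycles: it has a Hamiltonian path but, famously, no Hamiltonian cycle. A small exhaustive search makes this checkable:

```python
def petersen():
    # Outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
    adj = {v: set() for v in range(10)}
    for i in range(5):
        for u, v in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def has_hamiltonian(adj, cycle=False):
    n = len(adj)
    def dfs(v, visited):
        if len(visited) == n:
            # For a cycle, the walk must return to the start vertex 0.
            return (0 in adj[v]) if cycle else True
        return any(w not in visited and dfs(w, visited | {w}) for w in adj[v])
    # By vertex-transitivity, if a Hamiltonian path exists at all, some
    # automorphism moves one of its endpoints to vertex 0, so searching
    # from vertex 0 alone suffices.
    return dfs(0, {0})

g = petersen()
print(has_hamiltonian(g))              # True: a Hamiltonian path exists
print(has_hamiltonian(g, cycle=True))  # False: no Hamiltonian cycle
```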
For further information, see <a href=\"https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf\" rel=\"noreferrer\">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p>\n" }, { "answer_id": 259917, "score": 10, "text": "<p>The finitistic dimension conjecture for finite dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite.\nIt has only been proven for some very special classes of algebras, and in general there seems to be no reason why it should be true.\nReferences: <a href=\"https://arxiv.org/pdf/1407.2383v1.pdf\" rel=\"nofollow noreferrer\">https://arxiv.org/pdf/1407.2383v1.pdf</a>\n<a href=\"http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href=\"https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">Wayback Machine</a>)</p>\n" }, { "answer_id": 260226, "score": 10, "text": "<p>It seems to me that <a href=\"https://en.wikipedia.org/wiki/Zeeman_conjecture\" rel=\"nofollow noreferrer\">Zeeman's collapsing conjecture</a> satisfies the criteria given. The Zeeman conjecture implies both the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture\" rel=\"nofollow noreferrer\">Poincaré conjecture</a> (proved in 2003) and the <a href=\"https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture\" rel=\"nofollow noreferrer\">Andrews-Curtis conjecture</a>.</p>\n\n<p>The following is a quote from Matveev's <a href=\"https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46\" rel=\"nofollow noreferrer\">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p>\n\n<blockquote>\n <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is\n false.
If a counterexample indeed exists, then either it has a “bad”\n local structure (is not a special polyhedron) or it is a\n counterexample to either AC or PC.</p>\n</blockquote>\n" }, { "answer_id": 264502, "score": 4, "text": "<p>This one is due to Errett Bishop: \"all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s\" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p>\n\n<p>A reference for Bishop's claim is his article <a href=\"http://www.ams.org/mathscinet-getitem?mr=498014\" rel=\"nofollow noreferrer\">\"Crisis in contemporary mathematics\"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO (\"limited principle of omniscience\") related to the law of excluded middle. The LPO is discussed starting on page 511 of the article.</p>\n" }, { "answer_id": 387473, "score": 4, "text": "<p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href=\"https://en.wikipedia.org/wiki/Happy_ending_problem\" rel=\"nofollow noreferrer\">Happy Ending Problem</a>), which says that the smallest <span class=\"math-container\">$N$</span> for which any <span class=\"math-container\">$N$</span> points in general position in the plane contain the vertices of some convex <span class=\"math-container\">$n$</span>-gon is <span class=\"math-container\">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. 
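(An added illustration with an ad-hoc point configuration, not from the original answer.) For $n=4$ the conjecture says $N=2^2+1=5$: any 5 points in general position contain a convex quadrilateral, while 4 points with one inside the triangle of the others do not. Assuming general position, a brute-force convex-position test makes the small cases checkable:

```python
import math
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_position(pts):
    # Sort around the centroid (which lies inside the hull); the points are
    # in convex position iff every consecutive turn has the same orientation.
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    ordered = sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    n = len(ordered)
    turns = {cross(ordered[i], ordered[(i + 1) % n], ordered[(i + 2) % n]) > 0
             for i in range(n)}
    return len(turns) == 1

def has_convex_ngon(points, n):
    # Check every n-subset for convex position.
    return any(in_convex_position(c) for c in combinations(points, n))

triangle_plus_inner = [(0, 0), (10, 0), (5, 10), (5, 3)]  # one point inside
five_points = triangle_plus_inner + [(2, 6)]
print(has_convex_ngon(triangle_plus_inner, 4))  # False: no convex 4-gon
print(has_convex_ngon(five_points, 4))          # True: happy ending theorem
```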
Indeed, only the cases <span class=\"math-container\">$n=3,4,5,6$</span> are known, and while it is known that <span class=\"math-container\">$2^{n-2}+1$</span> is a lower bound for this <span class=\"math-container\">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p>\n" }, { "answer_id": 457228, "score": 3, "text": "<p><a href=\"https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D.\" rel=\"nofollow noreferrer\">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p>\n" }, { "answer_id": 259862, "score": 132, "text": "<p>A long-standing conjecture in Number Theory is that for each positive integer <span class=\"math-container\">$n$</span> there is no stretch of <span class=\"math-container\">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class=\"math-container\">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. </p>\n\n<p>But Hensley and Richards (<a href=\"https://eudml.org/doc/205282\" rel=\"noreferrer\">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href=\"https://mathscinet.ams.org/mathscinet-getitem?mr=0396440\" rel=\"noreferrer\">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class=\"math-container\">$k$</span>-tuples conjecture. </p>\n\n<p>The current consensus, I believe, is that prime <span class=\"math-container\">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p>\n" }, { "answer_id": 259846, "score": 125, "text": "<p>W. 
Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href=\"http://cantorsattic.info/Berkeley\">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p>\n\n<p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a \"Refute this!\" manner that seems to be in exactly the spirit of your question. </p>\n\n<p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p>\n" }, { "answer_id": 259845, "score": 89, "text": "<h1><a href=\"https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question\"><span class=\"math-container\">$P=NP$</span></a></h1>\n<p>Let me tick the list:</p>\n<ol>\n<li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class=\"math-container\">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p>\n</li>\n<li><p>Yes, it's The Open Problem in computational complexity theory</p>\n</li>\n<li><p>Yes, it's old</p>\n</li>\n<li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p>\n</li>\n<li><p>It's an equality rather than a negation</p>\n</li>\n</ol>\n" }, { "answer_id": 259851, "score": 68, "text": "<p>I've heard that Roger Heath-Brown has presented the following \"conjecture\" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p>\n\n<p>Let $\\pi_2(x)$ denote the number of twin primes less than $x$. 
Then</p>\n\n<blockquote>\n <blockquote>\n <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p>\n </blockquote>\n</blockquote>\n\n<p>where $c$ is the twin prime constant. </p>\n\n<p>In other words, the twin prime asymptotic holds with an error term of $O(1)$.</p>\n" }, { "answer_id": 259880, "score": 59, "text": "<p>I propose Edward Nelson's \"conjecture\" that Peano's arithmetic is inconsistent. </p>\n\n<p>First, to be honest, I am not aware that he stated it as a \"conjecture\", using that word, but this is something he said he believed to be true and that he wasn't able to prove (except for a little while, until a mistake was discovered by Terry Tao and others independently), though he tried a lot. So a conjecture it is, in the sense this word currently has. </p>\n\n<p>It is also certainly \"outrageous\", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p>\n\n<p>But let's check that it is also \"outrageous\" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) It is obviously extremely important. 5) The negation, that is the assertion that \"Peano's arithmetic is consistent\", was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p>\n\n<p>Let me add that it also satisfies something Gil Kalai added in a comment, namely \"I prefer examples where [...]
the proposer proposes the conjecture genuinely suggesting that it is true\".</p>\n" }, { "answer_id": 259878, "score": 48, "text": "<p>From this <a href=\"https://mathoverflow.net/a/218926/12978\">Math Overflow question</a>, Joel David Hamkins wrote:</p>\n\n<blockquote>\n <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p>\n</blockquote>\n\n<p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p>\n\n<p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p>\n" }, { "answer_id": 259943, "score": 42, "text": "<p><strong><a href=\"https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle\" rel=\"nofollow noreferrer\">Vopěnka's Principle</a></strong></p>\n<p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href=\"https://doi.org/10.1017/CBO9780511600579\" rel=\"nofollow noreferrer\">Locally presentable and accessible categories.</a>”:</p>\n<blockquote>\n<p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka)\nis that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory.\nWhen he constructed, in collaboration with Z. Hedrlín and A. Pultr,\na rigid graph on every set (see Lemma 2.64), he came to the conclusion\nthat, with some more effort, a large rigid class of graphs must surely be\nalso constructible. 
He then decided to tease set-theorists: he introduced\na new principle (known today as Vopěnka's principle), and proved some\nconsequences concerning large cardinals. He hoped that some set-theorists\nwould continue this line of research (which they did) until somebody showed\nthat the principle is nonsense. However, the latter never materialized - after\na number of unsuccessful attempts at constructing a large rigid class\nof graphs, Vopěnka's principle received its name from Vopěnka's disciples.\nOne of them, T. J. Jech, made Vopěnka's principle widely known. Later\nthe consistency of this principle was derived from the existence of huge\ncardinals: see [Powell 1972]; our account (in the Appendix) is taken from\n[Jech 1978]. Thus, today this principle has a firm position in the theory of\nlarge cardinals. Petr Vopěnka himself never published anything related to\nthat principle.</p>\n</blockquote>\n" }, { "answer_id": 259855, "score": 37, "text": "<p>I don't know about \"ridiculous\", but there is <a href=\"https://en.wikipedia.org/wiki/Hall&#39;s_conjecture\" rel=\"noreferrer\">Hall's Conjecture</a> in its original form:</p>\n\n<blockquote>\n <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \\neq b^3$, one has $$|a^2-b^3|&gt; C \\sqrt{|b|}\\;.$$</p>\n</blockquote>\n" }, { "answer_id": 260197, "score": 37, "text": "<p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p>\n\n<blockquote>\n <p>$e+\\pi$ is rational.</p>\n</blockquote>\n\n<p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\\pi$, more or less all that is known is the (elementary) fact that either $e+\\pi$ or $e\\cdot\\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p>\n\n<p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p>\n\n<p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p>\n" }, { "answer_id": 259919, "score": 35, "text": "<p>Existence of <a href=\"https://en.wikipedia.org/wiki/Siegel_zero\" rel=\"noreferrer\">Siegel zeros</a>.</p>\n\n<p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p>\n\n<p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p>\n\n<p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href=\"https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29\" rel=\"noreferrer\">back in 1884</a> though.</p>\n\n<p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions \"conspire\" to have certain non-uniform distribution. 
I'm no expert, but <a href=\"https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros\">here</a> some uses are listed (see also <a href=\"https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/\" rel=\"noreferrer\">this blog post</a> by Terry Tao).</p>\n\n<p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying \"yeah, zeros might exist, but not too close to $1$\".</p>\n" }, { "answer_id": 259977, "score": 34, "text": "<p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p>\n<p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class=\"math-container\">$\\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions, e.g., Artin–Mumford's obstruction via torsion in <span class=\"math-container\">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants, which have led to major breakthroughs.</p>\n<p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p>\n" }, { "answer_id": 260189, "score": 29, "text": "<p>I was giving a talk several years ago about the conjectured linear independence (over $\\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a \"Refute this!\" statement:</p>\n\n<blockquote>\n <p>If $\\zeta(x+iy)=0$, then $y\\in\\Bbb Q$.</p>\n</blockquote>\n\n<p>(In other words, even though the imaginary parts of the nontrivial zeros of $\\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p>\n\n<p>This \"conjecture\" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p>\n" }, { "answer_id": 333105, "score": 24, "text": "<p>Here is my favorite.</p>\n\n<p>A spherical simplex is called rational if all its dihedral angles are rational multiples of <span class=\"math-container\">$\\pi.$</span></p>\n\n<p><strong>Rational Volume Conjecture:</strong>\nThe volume of every rational spherical simplex is a rational multiple of <span class=\"math-container\">$\\pi^2.$</span></p>\n\n<p>It is true in all known examples, but is expected to be false.
The conjecture first appeared in the paper \"<a href=\"https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf\" rel=\"noreferrer\">Differential characters and geometric invariants</a>\" by Cheeger and Simons, p. 73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p>\n" }, { "answer_id": 260028, "score": 23, "text": "<p>For a prime $p$, an infinite group $G$ is a <a href=\"https://en.wikipedia.org/wiki/Tarski_monster_group\">Tarski monster</a> if each of its nontrivial proper subgroups has order $p$.</p>\n\n<p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups: such monsters obviously don't exist, it should be easy to prove that they don't exist, and yet we cannot prove it.</p>\n\n<p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that \"large\" means something like $p\geq 11$.</p>\n" }, { "answer_id": 284260, "score": 20, "text": "<p>I saw this really striking example yesterday here on MO: although the question <a href=\"https://mathoverflow.net/q/7304/41291\">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href=\"https://mathoverflow.net/a/37205/41291\">recent post</a> by <a href=\"https://mathoverflow.net/users/2349\">algori</a> suggests that this is still open.</p>\n\n<p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\\mathbb C\\mathbf P^n$ are algebraic</em>.</p>\n\n<p>The question is described as an open problem in Okonek-Schneider-Spindler (1980), but I believe it must have been asked much earlier.</p>\n" }, { "answer_id": 260132, "score": 18, "text": "<p><span class=\"math-container\">$\\DeclareMathOperator\\Sp{Sp}\\DeclareMathOperator\\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class=\"math-container\">$\\Sp(n)$</span>.
Since <span class=\"math-container\">$\\Sp(1) = S^3$</span>, <span class=\"math-container\">$\\cat(\\Sp(1)) = 1$</span>. In the 1960s, P. Schweitzer proved that <span class=\"math-container\">$\\cat(\\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class=\"math-container\">$\\cat(\\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class=\"math-container\">$\\cat(\\Sp(3)) = 5$</span>, so maybe it's true?</p>\n" }, { "answer_id": 259915, "score": 17, "text": "<p>If the holomorphic map $f:\\mathbb{C}\\to\\mathbb{C}$\nhas a fixed point $p$, and the derivative $\\lambda := f'(p)$ equals $e^{2\\pi i \\theta}$ (with irrational $\\theta$), one can ask if $f$ is conjugate to $z\\mapsto\\lambda\\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p>\n\n<ul>\n<li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li>\n<li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li>\n</ul>\n\n<p>Quadratic maps can have Siegel disks, but not for just any $\\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p>\n\n<p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. 
The <strong>outrageous conjecture</strong> that has been floating around is that:</p>\n\n<ul>\n<li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li>\n</ul>\n\n<p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p>\n" }, { "answer_id": 264543, "score": 17, "text": "<p><span class=\"math-container\">$S^6$</span> has a complex structure.</p>\n<p>I don't know if this applies, but it has a nice story. It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href=\"https://arxiv.org/abs/1610.09366\" rel=\"nofollow noreferrer\">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun, this would be a minor disaster.</p>\n" }, { "answer_id": 259955, "score": 15, "text": "<p>Let us say a graph class $\\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p>\n\n<p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs, which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003).
</p>\n\n<p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p>\n\n<p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\\text{C}$. It can be shown that $\\text{G1EXP} \\subsetneq \\text{G2EXP} \\subsetneq \\text{G3EXP} \\dots \\subsetneq \\text{GR} \\subsetneq \\text{GALL}$ where $\\text{kEXP}$ is the set of languages that can be computed in time $\\exp^k(\\text{poly}(n))$, $\\text{R}$ is the set of all decidable languages and $\\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\\text{GALL}$. Additionally, a graph class $\\mathcal{C}$ is in $\\text{GALL}$ iff $\\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \\in \\mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. 
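To make the labeling idea above concrete, here is a minimal illustrative sketch (in Python; the choice of interval graphs as the example class, and all names in the code, are mine, not from the answer): each vertex is labeled by its interval endpoints, and the label decoder declares two vertices adjacent exactly when the intervals intersect, so an adjacency query reads only the two labels and never consults the graph itself.

```python
# Illustrative adjacency labeling scheme (ALS) for interval graphs.
# Each vertex gets a short label (its interval endpoints); the label
# decoder recovers adjacency from the two labels alone.

def make_labels(intervals):
    """intervals: dict vertex -> (left, right); returns the labeling."""
    return {v: (left, right) for v, (left, right) in intervals.items()}

def decode(label_u, label_v):
    """Label decoder: two vertices are adjacent iff their intervals intersect."""
    (l1, r1), (l2, r2) = label_u, label_v
    return l1 <= r2 and l2 <= r1

# Example: a path a - b - c realized as intersecting intervals.
labels = make_labels({'a': (1, 3), 'b': (2, 5), 'c': (4, 6)})
assert decode(labels['a'], labels['b'])       # a and b overlap
assert decode(labels['b'], labels['c'])       # b and c overlap
assert not decode(labels['a'], labels['c'])   # a and c do not
```

With the endpoints normalized to $1,\dots,2n$, each label takes $O(\log n)$ bits and the decoder runs in constant time, which is the general shape of scheme the conjecture asks for.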
</p>\n\n<p>An ALS is a tuple $S=(F,c)$ where $F \\subseteq \\{0,1\\}^* \\times \\{0,1\\}^*$ is called label decoder and $c \\in \\mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\\ell \\colon V(G) \\rightarrow \\{0,1\\}^{c \\lceil \\log n \\rceil}$ such that for all $u,v \\in V(G)$ it holds that $(u,v) \\in E(G) \\Leftrightarrow (\\ell(u),\\ell(v)) \\in F$. A graph class $\\mathcal{C}$ has an ALS $S$ if every graph in $\\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\\mathcal{C}$). </p>\n" }, { "answer_id": 350559, "score": 14, "text": "<p>Let me promote a comment of Terry Tao into an answer and mention two conjectures.\n(See, e.g., <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">Thom - Finitary approximations of groups and their applications</a> and <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p>\n<p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p>\n<p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear</p>\n<p>Conjecture 1 and 2 as well as several other problems can be asked in the context of approximate homomorphisms of a group <span class=\"math-container\">$\\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of normalized Hamming norm then this leads to Conjecture 1. If the target groups are <span class=\"math-container\">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert Schmidt norm then this leads to Conjecture 2. 
A counter example to Conjecture 2 leads also to a counter example to <a href=\"https://en.wikipedia.org/wiki/Connes_embedding_problem\" rel=\"nofollow noreferrer\">Connes 1976 embedding conjecture</a> that was refuted <a href=\"https://arxiv.org/abs/2001.04383\" rel=\"nofollow noreferrer\">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class=\"math-container\">$\\mathrm{MIP}^* = \\mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjecture 1 and 2 as outrageous conjectures!</p>\n" }, { "answer_id": 387439, "score": 11, "text": "<p>The so-called <a href=\"https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture\" rel=\"noreferrer\">Lovász conjecture</a> states:</p>\n<blockquote>\n<p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p>\n</blockquote>\n<p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it. László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. 
For further information, see <a href=\"https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf\" rel=\"noreferrer\">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p>\n" }, { "answer_id": 259917, "score": 10, "text": "<p>The finitistic dimension conjecture for finite dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite.\nIt has only been proven for some very special classes of algebras, and in general there seems to be no reason why this should be true.\nReferences: <a href=\"https://arxiv.org/pdf/1407.2383v1.pdf\" rel=\"nofollow noreferrer\">https://arxiv.org/pdf/1407.2383v1.pdf</a>\n<a href=\"http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href=\"https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">Wayback Machine</a>)</p>\n" }, { "answer_id": 260226, "score": 10, "text": "<p>It seems to me that <a href=\"https://en.wikipedia.org/wiki/Zeeman_conjecture\" rel=\"nofollow noreferrer\">Zeeman's collapsing conjecture</a> satisfies the criteria given. The Zeeman conjecture implies both the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture\" rel=\"nofollow noreferrer\">Poincaré conjecture</a> (proved in 2003) and the <a href=\"https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture\" rel=\"nofollow noreferrer\">Andrews-Curtis conjecture</a>.</p>\n\n<p>The following is a quote from Matveev's <a href=\"https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46\" rel=\"nofollow noreferrer\">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p>\n\n<blockquote>\n <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is\n false. 
If a counterexample indeed exists, then either it has a “bad”\n local structure (is not a special polyhedron) or it is a\n counterexample to either AC or PC.</p>\n</blockquote>\n" }, { "answer_id": 264502, "score": 4, "text": "<p>This one is due to Errett Bishop: \"all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s\" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p>\n\n<p>A reference for Bishop's claim is his article <a href=\"http://www.ams.org/mathscinet-getitem?mr=498014\" rel=\"nofollow noreferrer\">\"Crisis in contemporary mathematics\"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO (\"limited principle of omniscience\") related to the law of excluded middle. The LPO is discussed starting on page 511 of the article.</p>\n" }, { "answer_id": 387473, "score": 4, "text": "<p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href=\"https://en.wikipedia.org/wiki/Happy_ending_problem\" rel=\"nofollow noreferrer\">Happy Ending Problem</a>), which says that the smallest <span class=\"math-container\">$N$</span> for which any <span class=\"math-container\">$N$</span> points in general position in the plane contain the vertices of some convex <span class=\"math-container\">$n$</span>-gon is <span class=\"math-container\">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. 
Indeed, only the cases <span class=\"math-container\">$n=3,4,5,6$</span> are known, and while it is known that <span class=\"math-container\">$2^{n-2}+1$</span> is a lower bound for this <span class=\"math-container\">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p>\n" }, { "answer_id": 457228, "score": 3, "text": "<p><a href=\"https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D.\" rel=\"nofollow noreferrer\">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p>\n" }, { "answer_id": 259862, "score": 132, "text": "<p>A long-standing conjecture in Number Theory is that for each positive integer <span class=\"math-container\">$n$</span> there is no stretch of <span class=\"math-container\">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class=\"math-container\">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. </p>\n\n<p>But Hensley and Richards (<a href=\"https://eudml.org/doc/205282\" rel=\"noreferrer\">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href=\"https://mathscinet.ams.org/mathscinet-getitem?mr=0396440\" rel=\"noreferrer\">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class=\"math-container\">$k$</span>-tuples conjecture. </p>\n\n<p>The current consensus, I believe, is that prime <span class=\"math-container\">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p>\n" }, { "answer_id": 259846, "score": 125, "text": "<p>W. 
Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href=\"http://cantorsattic.info/Berkeley\">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p>\n\n<p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a \"Refute this!\" manner that seems to be in exactly the spirit of your question. </p>\n\n<p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p>\n" }, { "answer_id": 259845, "score": 89, "text": "<h1><a href=\"https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question\"><span class=\"math-container\">$P=NP$</span></a></h1>\n<p>Let me tick the list:</p>\n<ol>\n<li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class=\"math-container\">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p>\n</li>\n<li><p>Yes, it's The Open Problem in computational complexity theory</p>\n</li>\n<li><p>Yes, it's old</p>\n</li>\n<li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p>\n</li>\n<li><p>It's an equality rather than a negation</p>\n</li>\n</ol>\n" }, { "answer_id": 259851, "score": 68, "text": "<p>I've heard that Roger Heath-Brown has presented the following \"conjecture\" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p>\n\n<p>Let $\\pi_2(x)$ denote the number of twin primes less than $x$. 
Then</p>\n\n<blockquote>\n <blockquote>\n <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p>\n </blockquote>\n</blockquote>\n\n<p>where $c$ is the twin prime constant. </p>\n\n<p>In other words, the twin prime asymptotic holds with error term $O(1)$.</p>\n" }, { "answer_id": 259880, "score": 59, "text": "<p>I propose Edward Nelson's \"conjecture\" that Peano's arithmetic is inconsistent. </p>\n\n<p>First, to be honest, I am not aware that he stated it as a \"conjecture\", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while, until a mistake was discovered by Terry Tao and others independently), though he tried a lot. So a conjecture it is, in the sense this word currently has. </p>\n\n<p>It is also certainly \"outrageous\", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p>\n\n<p>But let's check that it is also \"outrageous\" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) It is obviously extremely important. 5) The negation, that is the assertion that \"Peano's arithmetic is consistent\", was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p>\n\n<p>Let me add that it also satisfies something Gil Kalai added in a comment, namely \"I prefer examples where [...] 
the proposer proposes the conjecture genuinely suggesting that it is true\".</p>\n" }, { "answer_id": 259878, "score": 48, "text": "<p>From this <a href=\"https://mathoverflow.net/a/218926/12978\">Math Overflow question</a>, Joel David Hamkins wrote:</p>\n\n<blockquote>\n <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p>\n</blockquote>\n\n<p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p>\n\n<p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p>\n" }, { "answer_id": 259943, "score": 42, "text": "<p><strong><a href=\"https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle\" rel=\"nofollow noreferrer\">Vopěnka's Principle</a></strong></p>\n<p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href=\"https://doi.org/10.1017/CBO9780511600579\" rel=\"nofollow noreferrer\">Locally presentable and accessible categories.</a>”:</p>\n<blockquote>\n<p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka)\nis that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory.\nWhen he constructed, in collaboration with Z. Hedrlín and A. Pultr,\na rigid graph on every set (see Lemma 2.64), he came to the conclusion\nthat, with some more effort, a large rigid class of graphs must surely be\nalso constructible. 
He then decided to tease set-theorists: he introduced\na new principle (known today as Vopěnka's principle), and proved some\nconsequences concerning large cardinals. He hoped that some set-theorists\nwould continue this line of research (which they did) until somebody showed\nthat the principle is nonsense. However, the latter never materialized - after\na number of unsuccessful attempts at constructing a large rigid class\nof graphs, Vopěnka's principle received its name from Vopěnka's disciples.\nOne of them, T. J. Jech, made Vopěnka's principle widely known. Later\nthe consistency of this principle was derived from the existence of huge\ncardinals: see [Powell 1972]; our account (in the Appendix) is taken from\n[Jech 1978]. Thus, today this principle has a firm position in the theory of\nlarge cardinals. Petr Vopěnka himself never published anything related to\nthat principle.</p>\n</blockquote>\n" }, { "answer_id": 259855, "score": 37, "text": "<p>I don't know about \"ridiculous\", but there is <a href=\"https://en.wikipedia.org/wiki/Hall&#39;s_conjecture\" rel=\"noreferrer\">Hall's Conjecture</a> in its original form:</p>\n\n<blockquote>\n <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \\neq b^3$, one has $$|a^2-b^3|&gt; C \\sqrt{|b|}\\;.$$</p>\n</blockquote>\n" }, { "answer_id": 260197, "score": 37, "text": "<p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p>\n\n<blockquote>\n <p>$e+\\pi$ is rational.</p>\n</blockquote>\n\n<p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\\pi$, more or less all that is known is the (elementary) fact that either $e+\\pi$ or $e\\cdot\\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p>\n\n<p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p>\n\n<p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p>\n" }, { "answer_id": 259919, "score": 35, "text": "<p>Existence of <a href=\"https://en.wikipedia.org/wiki/Siegel_zero\" rel=\"noreferrer\">Siegel zeros</a>.</p>\n\n<p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p>\n\n<p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p>\n\n<p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href=\"https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29\" rel=\"noreferrer\">back in 1884</a> though.</p>\n\n<p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions \"conspire\" to have certain non-uniform distribution. 
I'm no expert, but <a href=\"https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros\">here</a> some uses are listed (see also <a href=\"https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/\" rel=\"noreferrer\">this blog post</a> by Terry Tao).</p>\n\n<p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying \"yeah, zeros might exist, but not too close to $1$\".</p>\n" }, { "answer_id": 259977, "score": 34, "text": "<p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p>\n<p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class=\"math-container\">$\\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions e.g., Artin–Mumford's obstruction via torsion in <span class=\"math-container\">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants which have led to major breakthroughs.</p>\n<p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p>\n" }, { "answer_id": 260189, "score": 29, "text": "<p>I was giving a talk several years ago about the conjectured linear independence (over $\\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a \"Refute this!\" statement:</p>\n\n<blockquote>\n <p>If $\\zeta(x+iy)=0$, then $y\\in\\Bbb Q$.</p>\n</blockquote>\n\n<p>(In other words, even though the imaginary parts of the nontrivial zeros of $\\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p>\n\n<p>This \"conjecture\" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p>\n" }, { "answer_id": 333105, "score": 24, "text": "<p>Here is my favorite.</p>\n\n<p>Spherical simplex is called rational, if all its dihedral angles are rational multiples of <span class=\"math-container\">$\\pi.$</span></p>\n\n<p><strong>Rational Volume Conjecture:</strong>\nVolume of every rational spherical simplex is a rational multiple of <span class=\"math-container\">$\\pi^2.$</span></p>\n\n<p>It is true in all known examples, but is expected to be false. 
The conjecture first appeared in \"<a href=\"https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf\" rel=\"noreferrer\">Differential characters and geometric invariants</a>\" paper of Cheeger and Simons, p.73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p>\n" }, { "answer_id": 260028, "score": 23, "text": "<p>For a prime $p$, an infinite group $G$ is a <a href=\"https://en.wikipedia.org/wiki/Tarski_monster_group\">Tarski monster</a> if each of its proper subgroups has order $p$.</p>\n\n<p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups, because such monsters obviously don't exist, it should be easy to prove that they don't exist, but we cannot prove it.</p>\n\n<p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that \"large\" means something like $p\\geq 11$.</p>\n" }, { "answer_id": 284260, "score": 20, "text": "<p>Seen this, I believe really striking, example yesterday here on MO: although the question <a href=\"https://mathoverflow.net/q/7304/41291\">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href=\"https://mathoverflow.net/a/37205/41291\">recent post</a> by <a href=\"https://mathoverflow.net/users/2349\">algori</a> suggests that this is still open.</p>\n\n<p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\\mathbb C\\mathbf P^n$ are algebraic</em>.</p>\n\n<p>The question is described as an open problem in Okonek-Schneider-Spindler (1980) but I believe must have been asked much earlier.</p>\n" }, { "answer_id": 260132, "score": 18, "text": "<p><span class=\"math-container\">$\\DeclareMathOperator\\Sp{Sp}\\DeclareMathOperator\\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class=\"math-container\">$\\Sp(n)$</span>. 
Since <span class=\"math-container\">$\\Sp(1) = S^3$</span>, <span class=\"math-container\">$\\cat(\\Sp(1)) = 1$</span>. In the 1960s, P. Schweitzer proved that <span class=\"math-container\">$\\cat(\\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class=\"math-container\">$\\cat(\\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class=\"math-container\">$\\cat(\\Sp(3)) = 5$</span>, so maybe it's true?</p>\n" }, { "answer_id": 259915, "score": 17, "text": "<p>If the holomorphic map $f:\\mathbb{C}\\to\\mathbb{C}$\nhas a fixed point $p$, and the derivative $\\lambda := f'(p)$ equals $e^{2\\pi i \\theta}$ (with irrational $\\theta$), one can ask if $f$ is conjugate to $z\\mapsto\\lambda\\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p>\n\n<ul>\n<li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li>\n<li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li>\n</ul>\n\n<p>Quadratic maps can have Siegel disks, but not for just any $\\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p>\n\n<p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. 
The <strong>outrageous conjecture</strong> that has been floating around is that:</p>\n\n<ul>\n<li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li>\n</ul>\n\n<p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p>\n" }, { "answer_id": 264543, "score": 17, "text": "<p><span class=\"math-container\">$S^6$</span> has a complex structure.</p>\n<p>I don’t know if this apply, but this has a nice story. It has been &quot;published&quot; to to be true and now Atiyah has a short paper <a href=\"https://arxiv.org/abs/1610.09366\" rel=\"nofollow noreferrer\">The non-existent complex 6-sphere</a> on arXiv claiming to be false, other important mathematicians has also work on this problem. According to LeBrun this would be a minor disaster.</p>\n" }, { "answer_id": 259955, "score": 15, "text": "<p>Let us say a graph class $\\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p>\n\n<p>Initially, this was posed as question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs which appeared at STOC '88. It was restated as conjecture by Spinrad in the book Efficient Graph Representations (2003). 
</p>\n\n<p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p>\n\n<p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\\text{C}$. It can be shown that $\\text{G1EXP} \\subsetneq \\text{G2EXP} \\subsetneq \\text{G3EXP} \\dots \\subsetneq \\text{GR} \\subsetneq \\text{GALL}$ where $\\text{kEXP}$ is the set of languages that can be computed in time $\\exp^k(\\text{poly}(n))$, $\\text{R}$ is the set of all decidable languages and $\\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\\text{GALL}$. Additionally, a graph class $\\mathcal{C}$ is in $\\text{GALL}$ iff $\\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \\in \\mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. 
</p>\n\n<p>An ALS is a tuple $S=(F,c)$ where $F \\subseteq \\{0,1\\}^* \\times \\{0,1\\}^*$ is called label decoder and $c \\in \\mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\\ell \\colon V(G) \\rightarrow \\{0,1\\}^{c \\lceil \\log n \\rceil}$ such that for all $u,v \\in V(G)$ it holds that $(u,v) \\in E(G) \\Leftrightarrow (\\ell(u),\\ell(v)) \\in F$. A graph class $\\mathcal{C}$ has an ALS $S$ if every graph in $\\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\\mathcal{C}$). </p>\n" }, { "answer_id": 350559, "score": 14, "text": "<p>Let me promote a comment of Terry Tao into an answer and mention two conjectures.\n(See, e.g., <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">Thom - Finitary approximations of groups and their applications</a> and <a href=\"https://arxiv.org/abs/1711.10238\" rel=\"nofollow noreferrer\">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p>\n<p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p>\n<p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear</p>\n<p>Conjecture 1 and 2 as well as several other problems can be asked in the context of approximate homomorphisms of a group <span class=\"math-container\">$\\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of normalized Hamming norm then this leads to Conjecture 1. If the target groups are <span class=\"math-container\">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert Schmidt norm then this leads to Conjecture 2. 
A counter example to Conjecture 2 leads also to a counter example to <a href=\"https://en.wikipedia.org/wiki/Connes_embedding_problem\" rel=\"nofollow noreferrer\">Connes 1976 embedding conjecture</a> that was refuted <a href=\"https://arxiv.org/abs/2001.04383\" rel=\"nofollow noreferrer\">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class=\"math-container\">$\\mathrm{MIP}^* = \\mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjecture 1 and 2 as outrageous conjectures!</p>\n" }, { "answer_id": 387439, "score": 11, "text": "<p>The so-called <a href=\"https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture\" rel=\"noreferrer\">Lovász conjecture</a> states:</p>\n<blockquote>\n<p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p>\n</blockquote>\n<p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it. László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. 
For further information, see <a href=\"https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf\" rel=\"noreferrer\">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p>\n" }, { "answer_id": 259917, "score": 10, "text": "<p>The finitistic dimension conjecture for finite dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite.\nIt is proven only for some very special classes of algebras, and in general there seems to be no reason why this should be true.\nReferences: <a href=\"https://arxiv.org/pdf/1407.2383v1.pdf\" rel=\"nofollow noreferrer\">https://arxiv.org/pdf/1407.2383v1.pdf</a>\n<a href=\"http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href=\"https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf\" rel=\"nofollow noreferrer\">Wayback Machine</a>)</p>\n" }, { "answer_id": 260226, "score": 10, "text": "<p>It seems to me that <a href=\"https://en.wikipedia.org/wiki/Zeeman_conjecture\" rel=\"nofollow noreferrer\">Zeeman's collapsing conjecture</a> satisfies the criteria given. The Zeeman conjecture implies both the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture\" rel=\"nofollow noreferrer\">Poincaré conjecture</a> (proved in 2003) and the <a href=\"https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture\" rel=\"nofollow noreferrer\">Andrews-Curtis conjecture</a>.</p>\n\n<p>The following is a quote from Matveev's <a href=\"https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46\" rel=\"nofollow noreferrer\">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p>\n\n<blockquote>\n <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is\n false. 
If a counterexample indeed exists, then either it has a “bad”\n local structure (is not a special polyhedron) or it is a\n counterexample to either AC or PC.</p>\n</blockquote>\n" }, { "answer_id": 264502, "score": 4, "text": "<p>This one is due to Errett Bishop: \"all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s\" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p>\n\n<p>A reference for Bishop's claim is his article <a href=\"http://www.ams.org/mathscinet-getitem?mr=498014\" rel=\"nofollow noreferrer\">\"Crisis in contemporary mathematics\"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO (\"limited principle of omniscience\") related to the law of excluded middle. The LPO is discussed starting on page 511 of the article.</p>\n" }, { "answer_id": 387473, "score": 4, "text": "<p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href=\"https://en.wikipedia.org/wiki/Happy_ending_problem\" rel=\"nofollow noreferrer\">Happy Ending Problem</a>), which says that the smallest <span class=\"math-container\">$N$</span> for which any <span class=\"math-container\">$N$</span> points in general position in the plane contain the vertices of some convex <span class=\"math-container\">$n$</span>-gon is <span class=\"math-container\">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. 
Indeed, only the cases <span class=\"math-container\">$n=3,4,5,6$</span> are known, and while it is known that <span class=\"math-container\">$2^{n-2}+1$</span> is a lower bound for this <span class=\"math-container\">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p>\n" }, { "answer_id": 457228, "score": 3, "text": "<p><a href=\"https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D.\" rel=\"nofollow noreferrer\">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p>\n" }, { "answer_id": 259862, "score": 132, "text": "<p>A long-standing conjecture in Number Theory is that for each positive integer <span class=\"math-container\">$n$</span> there is no stretch of <span class=\"math-container\">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class=\"math-container\">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. </p>\n\n<p>But Hensley and Richards (<a href=\"https://eudml.org/doc/205282\" rel=\"noreferrer\">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href=\"https://mathscinet.ams.org/mathscinet-getitem?mr=0396440\" rel=\"noreferrer\">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class=\"math-container\">$k$</span>-tuples conjecture. </p>\n\n<p>The current consensus, I believe, is that prime <span class=\"math-container\">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p>\n" }, { "answer_id": 259846, "score": 125, "text": "<p>W. 
Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href=\"http://cantorsattic.info/Berkeley\">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p>\n\n<p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a \"Refute this!\" manner that seems to be in exactly the spirit of your question. </p>\n\n<p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p>\n" }, { "answer_id": 259845, "score": 89, "text": "<h1><a href=\"https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question\"><span class=\"math-container\">$P=NP$</span></a></h1>\n<p>Let me tick the list:</p>\n<ol>\n<li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class=\"math-container\">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p>\n</li>\n<li><p>Yes, it's The Open Problem in computational complexity theory</p>\n</li>\n<li><p>Yes, it's old</p>\n</li>\n<li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p>\n</li>\n<li><p>It's an equality rather than a negation</p>\n</li>\n</ol>\n" }, { "answer_id": 259851, "score": 68, "text": "<p>I've heard that Roger Heath-Brown has presented the following \"conjecture\" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p>\n\n<p>Let $\\pi_2(x)$ denote the number of twin primes less than $x$. 
Then</p>\n\n<blockquote>\n <blockquote>\n <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p>\n </blockquote>\n</blockquote>\n\n<p>where $c$ is the twin prime constant. </p>\n\n<p>In other words, the twin prime asymptotic holds with an error term of $O(1)$.</p>\n" }, { "answer_id": 259880, "score": 59, "text": "<p>I propose Edward Nelson's \"conjecture\" that Peano's arithmetic is inconsistent. </p>\n\n<p>First, to be honest, I am not aware that he stated it as a \"conjecture\", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while, but a mistake was discovered by Terry Tao and others independently) though he tried a lot. So a conjecture it is, in the sense this word currently has. </p>\n\n<p>It is also certainly \"outrageous\", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p>\n\n<p>But let's check that it is also \"outrageous\" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) It is obviously extremely important. 5) The negation, that is the assertion that \"Peano's arithmetic is consistent\", was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p>\n\n<p>Let me add that it also satisfies something Gil Kalai added in a comment, namely \"I prefer examples where [...] 
the proposer proposes the conjecture genuinely suggesting that it is true\".</p>\n" }, { "answer_id": 259878, "score": 48, "text": "<p>From this <a href=\"https://mathoverflow.net/a/218926/12978\">Math Overflow question</a>, Joel David Hamkins wrote:</p>\n\n<blockquote>\n <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p>\n</blockquote>\n\n<p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p>\n\n<p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p>\n" }, { "answer_id": 259943, "score": 42, "text": "<p><strong><a href=\"https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle\" rel=\"nofollow noreferrer\">Vopěnka's Principle</a></strong></p>\n<p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href=\"https://doi.org/10.1017/CBO9780511600579\" rel=\"nofollow noreferrer\">Locally presentable and accessible categories.</a>”:</p>\n<blockquote>\n<p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka)\nis that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory.\nWhen he constructed, in collaboration with Z. Hedrlín and A. Pultr,\na rigid graph on every set (see Lemma 2.64), he came to the conclusion\nthat, with some more effort, a large rigid class of graphs must surely be\nalso constructible. 
He then decided to tease set-theorists: he introduced\na new principle (known today as Vopěnka's principle), and proved some\nconsequences concerning large cardinals. He hoped that some set-theorists\nwould continue this line of research (which they did) until somebody showed\nthat the principle is nonsense. However, the latter never materialized - after\na number of unsuccessful attempts at constructing a large rigid class\nof graphs, Vopěnka's principle received its name from Vopěnka's disciples.\nOne of them, T. J. Jech, made Vopěnka's principle widely known. Later\nthe consistency of this principle was derived from the existence of huge\ncardinals: see [Powell 1972]; our account (in the Appendix) is taken from\n[Jech 1978]. Thus, today this principle has a firm position in the theory of\nlarge cardinals. Petr Vopěnka himself never published anything related to\nthat principle.</p>\n</blockquote>\n" }, { "answer_id": 259855, "score": 37, "text": "<p>I don't know about \"ridiculous\", but there is <a href=\"https://en.wikipedia.org/wiki/Hall&#39;s_conjecture\" rel=\"noreferrer\">Hall's Conjecture</a> in its original form:</p>\n\n<blockquote>\n <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \\neq b^3$, one has $$|a^2-b^3|&gt; C \\sqrt{|b|}\\;.$$</p>\n</blockquote>\n" }, { "answer_id": 260197, "score": 37, "text": "<p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p>\n\n<blockquote>\n <p>$e+\\pi$ is rational.</p>\n</blockquote>\n\n<p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\\pi$, more or less all that is known is the (elementary) fact that either $e+\\pi$ or $e\\cdot\\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p>\n\n<p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p>\n\n<p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p>\n" }, { "answer_id": 259919, "score": 35, "text": "<p>Existence of <a href=\"https://en.wikipedia.org/wiki/Siegel_zero\" rel=\"noreferrer\">Siegel zeros</a>.</p>\n\n<p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p>\n\n<p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p>\n\n<p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href=\"https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29\" rel=\"noreferrer\">back in 1884</a> though.</p>\n\n<p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions \"conspire\" to have certain non-uniform distribution. 
I'm no expert, but <a href=\"https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros\">here</a> some uses are listed (see also <a href=\"https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/\" rel=\"noreferrer\">this blog post</a> by Terry Tao).</p>\n\n<p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying \"yeah, zeros might exist, but not too close to $1$\".</p>\n" }, { "answer_id": 259977, "score": 34, "text": "<p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p>\n<p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class=\"math-container\">$\\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions e.g., Artin–Mumford's obstruction via torsion in <span class=\"math-container\">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants which have led to major breakthroughs.</p>\n<p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p>\n" }, { "answer_id": 260189, "score": 29, "text": "<p>I was giving a talk several years ago about the conjectured linear independence (over $\\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a \"Refute this!\" statement:</p>\n\n<blockquote>\n <p>If $\\zeta(x+iy)=0$, then $y\\in\\Bbb Q$.</p>\n</blockquote>\n\n<p>(In other words, even though the imaginary parts of the nontrivial zeros of $\\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p>\n\n<p>This \"conjecture\" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p>\n" }, { "answer_id": 333105, "score": 24, "text": "<p>Here is my favorite.</p>\n\n<p>Spherical simplex is called rational, if all its dihedral angles are rational multiples of <span class=\"math-container\">$\\pi.$</span></p>\n\n<p><strong>Rational Volume Conjecture:</strong>\nVolume of every rational spherical simplex is a rational multiple of <span class=\"math-container\">$\\pi^2.$</span></p>\n\n<p>It is true in all known examples, but is expected to be false. 
The conjecture first appeared in the paper \"<a href=\"https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf\" rel=\"noreferrer\">Differential characters and geometric invariants</a>\" of Cheeger and Simons, p. 73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p>\n" }, { "answer_id": 260028, "score": 23, "text": "<p>For a prime $p$, an infinite group $G$ is a <a href=\"https://en.wikipedia.org/wiki/Tarski_monster_group\">Tarski monster</a> if each of its proper subgroups has order $p$.</p>\n\n<p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups, because such monsters obviously don't exist, it should be easy to prove that they don't exist, but we cannot prove it.</p>\n\n<p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that \"large\" means something like $p\\geq 11$.</p>\n" }, { "answer_id": 284260, "score": 20, "text": "<p>I saw this, I believe really striking, example yesterday here on MO: although the question <a href=\"https://mathoverflow.net/q/7304/41291\">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href=\"https://mathoverflow.net/a/37205/41291\">recent post</a> by <a href=\"https://mathoverflow.net/users/2349\">algori</a> suggests that this is still open.</p>\n\n<p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\\mathbb C\\mathbf P^n$ are algebraic</em>.</p>\n\n<p>The question is described as an open problem in Okonek-Schneider-Spindler (1980), but I believe it must have been asked much earlier.</p>\n" }, { "answer_id": 260132, "score": 18, "text": "<p><span class=\"math-container\">$\\DeclareMathOperator\\Sp{Sp}\\DeclareMathOperator\\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class=\"math-container\">$\\Sp(n)$</span>. 
Since <span class=\"math-container\">$\\Sp(1) = S^3$</span>, <span class=\"math-container\">$\\cat(\\Sp(1)) = 1$</span>. In the 1960s, P. Schweitzer proved that <span class=\"math-container\">$\\cat(\\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class=\"math-container\">$\\cat(\\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class=\"math-container\">$\\cat(\\Sp(3)) = 5$</span>, so maybe it's true?</p>\n" }, { "answer_id": 259915, "score": 17, "text": "<p>If the holomorphic map $f:\\mathbb{C}\\to\\mathbb{C}$\nhas a fixed point $p$, and the derivative $\\lambda := f'(p)$ equals $e^{2\\pi i \\theta}$ (with irrational $\\theta$), one can ask if $f$ is conjugate to $z\\mapsto\\lambda\\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p>\n\n<ul>\n<li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li>\n<li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li>\n</ul>\n\n<p>Quadratic maps can have Siegel disks, but not for just any $\\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p>\n\n<p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. 
The <strong>outrageous conjecture</strong> that has been floating around is that:</p>\n\n<ul>\n<li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li>\n</ul>\n\n<p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p>\n" }, { "answer_id": 264543, "score": 17, "text": "<p><span class=\"math-container\">$S^6$</span> has a complex structure.</p>\n<p>I don’t know if this applies, but this has a nice story. It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href=\"https://arxiv.org/abs/1610.09366\" rel=\"nofollow noreferrer\">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun this would be a minor disaster.</p>\n" }, { "answer_id": 259955, "score": 15, "text": "<p>Let us say a graph class $\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p>\n\n<p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs, which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003). 
</p>\n\n<p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p>\n\n<p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\\text{C}$. It can be shown that $\\text{G1EXP} \\subsetneq \\text{G2EXP} \\subsetneq \\text{G3EXP} \\dots \\subsetneq \\text{GR} \\subsetneq \\text{GALL}$ where $\\text{kEXP}$ is the set of languages that can be computed in time $\\exp^k(\\text{poly}(n))$, $\\text{R}$ is the set of all decidable languages and $\\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\\text{GALL}$. Additionally, a graph class $\\mathcal{C}$ is in $\\text{GALL}$ iff $\\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \\in \\mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. 
</p>\n\n<p>An ALS is a tuple $S=(F,c)$ where $F \\subseteq \\{0,1\\}^* \\times \\{0,1\\}^*$ is called label decoder and $c \\in \\mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\\ell \\colon V(G) \\rightarrow \\{0,1\\}^{c \\lceil \\log n \\rceil}$ such that for all $u,v \\in V(G)$ it holds that $(u,v) \\in E(G) \\Leftrightarrow (\\ell(u),\\ell(v)) \\in F$. A graph class $\\mathcal{C}$ has an ALS $S$ if every graph in $\\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\\mathcal{C}$). </p>\n" }
]
Question: The most outrageous (or ridiculous) conjectures in mathematics Answer 1: <p>A long-standing conjecture in Number Theory is that for each positive integer <span class="math-container">$n$</span> there is no stretch of <span class="math-container">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class="math-container">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. </p> <p>But Hensley and Richards (<a href="https://eudml.org/doc/205282" rel="noreferrer">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=0396440" rel="noreferrer">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class="math-container">$k$</span>-tuples conjecture. </p> <p>The current consensus, I believe, is that prime <span class="math-container">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p> Answer 2: <p>W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href="http://cantorsattic.info/Berkeley">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p> <p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a "Refute this!" manner that seems to be in exactly the spirit of your question. </p> <p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. 
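The conjecture in Answer 1 is easy to probe numerically. The following sketch (my own illustration; the limit $10^5$ and the sample values of $n$ are arbitrary choices) sieves the primes and compares the densest stretch of $n$ consecutive integers with the initial stretch from $2$ to $n+1$:

```python
def sieve(limit):
    """Boolean primality table for 0..limit (simple Eratosthenes sieve)."""
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return is_prime

def max_window_count(is_prime, n):
    """Largest number of primes in any stretch of n consecutive integers
    x, x+1, ..., x+n-1 with x >= 1 and x+n-1 <= limit."""
    limit = len(is_prime) - 1
    window = sum(is_prime[1 : 1 + n])
    best = window
    for x in range(2, limit - n + 2):
        window += is_prime[x + n - 1] - is_prime[x - 1]
        best = max(best, window)
    return best

is_prime = sieve(100_000)
for n in (10, 100, 1000):
    initial = sum(is_prime[2 : n + 2])  # primes in the stretch 2 .. n+1
    print(n, initial, max_window_count(is_prime, n))
```

Up to these small bounds the initial stretch is always the densest, which is exactly why the conjecture looks plausible even though, as noted above, it conflicts with the prime $k$-tuples conjecture.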
</p> Answer 3: <h1><a href="https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question"><span class="math-container">$P=NP$</span></a></h1> <p>Let me tick the list:</p> <ol> <li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class="math-container">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p> </li> <li><p>Yes, it's The Open Problem in computational complexity theory</p> </li> <li><p>Yes, it's old</p> </li> <li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p> </li> <li><p>It's an equality rather than a negation</p> </li> </ol> Answer 4: <p>I've heard that Roger Heath-Brown has presented the following "conjecture" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p> <p>Let $\pi_2(x)$ denote the number of twin primes less than $x$. Then</p> <blockquote> <blockquote> <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p> </blockquote> </blockquote> <p>where $c$ is the twin prime constant. </p> <p>In other words, the twin prime asymptotic holds with error term $O(1)$.</p> Answer 5: <p>I propose Edward Nelson's "conjecture" that Peano's arithmetic is inconsistent. </p> <p>First, to be honest, I am not aware that he stated it as a "conjecture", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while, until a mistake was discovered independently by Terry Tao and others) though he tried a lot. So a conjecture it is, in the sense this word currently has.
</p> <p>It is also certainly "outrageous", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p> <p>But let's check that it is also "outrageous" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public long before 2006. 4) It is obviously extremely important. 5) The negation, that is the assertion that "Peano's arithmetic is consistent", was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (within the system itself, etc.)</p> <p>Let me add that it also satisfies something Gil Kalai added in a comment, namely "I prefer examples where [...] the proposer proposes the conjecture genuinely suggesting that it is true".</p> Answer 6: <p>From this <a href="https://mathoverflow.net/a/218926/12978">Math Overflow question</a>, Joel David Hamkins wrote:</p> <blockquote> <p>I once heard Harvey Friedman suggest the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p> </blockquote> <p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p> <p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). 
</p> Answer 7: <p><strong><a href="https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle" rel="nofollow noreferrer">Vopěnka's Principle</a></strong></p> <p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I had better just quote a passage from page 279 of Adámek and Rosický “<a href="https://doi.org/10.1017/CBO9780511600579" rel="nofollow noreferrer">Locally presentable and accessible categories.</a>”:</p> <blockquote> <p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka) is that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory. When he constructed, in collaboration with Z. Hedrlín and A. Pultr, a rigid graph on every set (see Lemma 2.64), he came to the conclusion that, with some more effort, a large rigid class of graphs must surely be also constructible. He then decided to tease set-theorists: he introduced a new principle (known today as Vopěnka's principle), and proved some consequences concerning large cardinals. He hoped that some set-theorists would continue this line of research (which they did) until somebody showed that the principle is nonsense. However, the latter never materialized - after a number of unsuccessful attempts at constructing a large rigid class of graphs, Vopěnka's principle received its name from Vopěnka's disciples. One of them, T. J. Jech, made Vopěnka's principle widely known. Later the consistency of this principle was derived from the existence of huge cardinals: see [Powell 1972]; our account (in the Appendix) is taken from [Jech 1978]. Thus, today this principle has a firm position in the theory of large cardinals.
Petr Vopěnka himself never published anything related to that principle.</p> </blockquote> Answer 8: <p>I don't know about "ridiculous", but there is <a href="https://en.wikipedia.org/wiki/Hall&#39;s_conjecture" rel="noreferrer">Hall's Conjecture</a> in its original form:</p> <blockquote> <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \neq b^3$, one has $$|a^2-b^3|&gt; C \sqrt{|b|}\;.$$</p> </blockquote> Answer 9: <p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p> <blockquote> <p>$e+\pi$ is rational.</p> </blockquote> <p>I think most mathematicians would agree that it is ridiculous. It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\pi$, more or less all that is known is the (elementary) fact that either $e+\pi$ or $e\cdot\pi$ is transcendental: if both were algebraic, then $e$ and $\pi$, being the roots of $x^2-(e+\pi)x+e\cdot\pi$, would be algebraic as well (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p> <p>I'm not sure when it was first posed publicly, but it is very natural and unlikely not to have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p> <p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p> Answer 10: <p>Existence of <a href="https://en.wikipedia.org/wiki/Siegel_zero" rel="noreferrer">Siegel zeros</a>.</p> <p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false.
I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p> <p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near being resolved.</p> <p>3) According to the Wikipedia article, this type of zeros was considered back in the 1930s, and earlier by Landau, but I don't know if they explicitly stated the conjecture. GRH was posed <a href="https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29" rel="noreferrer">back in 1884</a> though.</p> <p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions "conspire" to have a certain non-uniform distribution. I'm no expert, but <a href="https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros">here</a> some uses are listed (see also <a href="https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/" rel="noreferrer">this blog post</a> by Terry Tao).</p> <p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying "yeah, zeros might exist, but not too close to $1$".</p> Answer 11: <p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p> <p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized).
Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class="math-container">$\mathbb{P}^4$</span> is irrational. Since then, there's been a lot of progress on rationality obstructions, e.g., Artin–Mumford's obstruction via torsion in <span class="math-container">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants, which have led to major breakthroughs.</p> <p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p> Answer 12: <p>I was giving a talk several years ago about the conjectured linear independence (over $\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a "Refute this!" statement:</p> <blockquote> <p>If $\zeta(x+iy)=0$, then $y\in\Bbb Q$.</p> </blockquote> <p>(In other words, even though the imaginary parts of the nontrivial zeros of $\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p> <p>This "conjecture" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p> Answer 13: <p>Here is my favorite.</p> <p>A spherical simplex is called rational if all its dihedral angles are rational multiples of <span class="math-container">$\pi.$</span></p> <p><strong>Rational Volume Conjecture:</strong> The volume of every rational spherical simplex is a rational multiple of <span class="math-container">$\pi^2.$</span></p> <p>It is true in all known examples, but is expected to be false.
The conjecture first appeared in the paper "<a href="https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf" rel="noreferrer">Differential characters and geometric invariants</a>" of Cheeger and Simons, p. 73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p> Answer 14: <p>For a prime $p$, an infinite group $G$ is a <a href="https://en.wikipedia.org/wiki/Tarski_monster_group">Tarski monster</a> if each of its nontrivial proper subgroups has order $p$.</p> <p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups, because such monsters obviously don't exist, and it should be easy to prove that they don't exist, yet we cannot prove it.</p> <p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that "large" means something like $p\geq 11$.</p> Answer 15: <p>I saw this, I believe really striking, example yesterday here on MO: although the question <a href="https://mathoverflow.net/q/7304/41291">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href="https://mathoverflow.net/a/37205/41291">recent post</a> by <a href="https://mathoverflow.net/users/2349">algori</a> suggests that this is still open.</p> <p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\mathbb C\mathbf P^n$ are algebraic</em>.</p> <p>The question is described as an open problem in Okonek-Schneider-Spindler (1980) but I believe it must have been asked much earlier.</p> Answer 16: <p><span class="math-container">$\DeclareMathOperator\Sp{Sp}\DeclareMathOperator\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class="math-container">$\Sp(n)$</span>. Since <span class="math-container">$\Sp(1) = S^3$</span>, <span class="math-container">$\cat(\Sp(1)) = 1$</span>. In the 1960s, P.
Schweitzer proved that <span class="math-container">$\cat(\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class="math-container">$\cat(\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class="math-container">$\cat(\Sp(3)) = 5$</span>, so maybe it's true?</p> Answer 17: <p>If the holomorphic map $f:\mathbb{C}\to\mathbb{C}$ has a fixed point $p$, and the derivative $\lambda := f'(p)$ equals $e^{2\pi i \theta}$ (with irrational $\theta$), one can ask if $f$ is conjugate to $z\mapsto\lambda\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p> <ul> <li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li> <li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li> </ul> <p>Quadratic maps can have Siegel disks, but not for just any $\theta$; the number-theoretical properties of this 'rotation number' are relevant. However, if $\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p> <p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. The <strong>outrageous conjecture</strong> that has been floating around is that:</p> <ul> <li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li> </ul> <p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p> Answer 18: <p><span class="math-container">$S^6$</span> has a complex structure.</p> <p>I don’t know if this applies, but it has a nice story.
It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href="https://arxiv.org/abs/1610.09366" rel="nofollow noreferrer">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun this would be a minor disaster.</p> Answer 19: <p>Let us say a graph class $\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p> <p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs, which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003). </p> <p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p> <p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds, for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\text{C}$.
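To make the notion of a labeling scheme concrete, here is a toy sketch of the classical folklore ALS for forests (this specific code is my own illustration, not taken from the papers cited): each vertex stores the pair (own id, parent id), so labels use $2\lceil\log n\rceil$ bits, and the decoder declares two vertices adjacent iff one is the other's parent.

```python
def forest_labels(n, parent):
    """Folklore ALS for rooted forests: vertex v is labeled (v, parent of v);
    roots store themselves as their own parent.
    Each label fits in 2*ceil(log2 n) bits."""
    return {v: (v, parent.get(v, v)) for v in range(n)}

def decoder(label_u, label_v):
    """Two vertices are adjacent iff one is the parent of the other."""
    (u, pu), (v, pv) = label_u, label_v
    return u != v and (pu == v or pv == u)

# A small forest: edges (0,1), (0,2), (2,3), plus the isolated vertex 4.
parent = {1: 0, 2: 0, 3: 2}
labels = forest_labels(5, parent)
edges = {frozenset(e) for e in [(0, 1), (0, 2), (2, 3)]}
for u in range(5):
    for v in range(u + 1, 5):
        assert decoder(labels[u], labels[v]) == (frozenset((u, v)) in edges)
print("decoder agrees with the adjacency of the example forest")
```

The conjecture asks whether every small, hereditary class admits something like this with a polynomial-time decoder.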
It can be shown that $\text{G1EXP} \subsetneq \text{G2EXP} \subsetneq \text{G3EXP} \dots \subsetneq \text{GR} \subsetneq \text{GALL}$ where $\text{kEXP}$ is the set of languages that can be computed in time $\exp^k(\text{poly}(n))$, $\text{R}$ is the set of all decidable languages and $\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\text{GALL}$. Additionally, a graph class $\mathcal{C}$ is in $\text{GALL}$ iff $\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \in \mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. </p> <p>An ALS is a tuple $S=(F,c)$ where $F \subseteq \{0,1\}^* \times \{0,1\}^*$ is called label decoder and $c \in \mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\ell \colon V(G) \rightarrow \{0,1\}^{c \lceil \log n \rceil}$ such that for all $u,v \in V(G)$ it holds that $(u,v) \in E(G) \Leftrightarrow (\ell(u),\ell(v)) \in F$. A graph class $\mathcal{C}$ has an ALS $S$ if every graph in $\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\mathcal{C}$). </p> Answer 20: <p>Let me promote a comment of Terry Tao into an answer and mention two conjectures. 
(See, e.g., <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">Thom - Finitary approximations of groups and their applications</a> and <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p> <p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p> <p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear.</p> <p>Conjectures 1 and 2, as well as several other problems, can be asked in the context of approximate homomorphisms of a group <span class="math-container">$\Gamma$</span>. If the target groups are symmetric groups and the approximation is in terms of the normalized Hamming norm, then this leads to Conjecture 1. If the target groups are <span class="math-container">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert–Schmidt norm, then this leads to Conjecture 2. A counterexample to Conjecture 2 also leads to a counterexample to <a href="https://en.wikipedia.org/wiki/Connes_embedding_problem" rel="nofollow noreferrer">Connes' 1976 embedding conjecture</a>, which was refuted <a href="https://arxiv.org/abs/2001.04383" rel="nofollow noreferrer">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class="math-container">$\mathrm{MIP}^* = \mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjectures 1 and 2 as outrageous conjectures!</p> Answer 21: <p>The so-called <a href="https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture" rel="noreferrer">Lovász conjecture</a> states:</p> <blockquote> <p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p> </blockquote> <p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it.
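As a concrete illustration of what the conjecture asserts (the script below is my own sketch, not part of the original discussion): the Petersen graph is vertex-transitive and famously has no Hamiltonian cycle, yet, consistent with the conjecture, it does contain a Hamiltonian path. A small backtracking search checks both claims:

```python
def petersen_edges():
    """Edge set of the Petersen graph: outer 5-cycle on 0..4, inner
    pentagram on 5..9, and spokes i -- i+5."""
    edges = set()
    for i in range(5):
        edges.add(frozenset((i, (i + 1) % 5)))          # outer cycle
        edges.add(frozenset((5 + i, 5 + (i + 2) % 5)))  # inner pentagram
        edges.add(frozenset((i, i + 5)))                # spokes
    return edges

def extend(path, used, edges, n, cycle):
    """Depth-first backtracking; if cycle=True, additionally require an
    edge closing the path back to its starting vertex."""
    if len(path) == n:
        return not cycle or frozenset((path[-1], path[0])) in edges
    for v in range(n):
        if v not in used and frozenset((path[-1], v)) in edges:
            path.append(v); used.add(v)
            if extend(path, used, edges, n, cycle):
                return True
            path.pop(); used.remove(v)
    return False

def has_hamiltonian(edges, n, cycle=False):
    return any(extend([s], {s}, edges, n, cycle) for s in range(n))

E = petersen_edges()
print(has_hamiltonian(E, 10))              # a Hamiltonian path exists
print(has_hamiltonian(E, 10, cycle=True))  # but no Hamiltonian cycle
```

So the path version survives the most famous counterexample to the cycle version, which is part of why the conjecture is hard to dismiss outright.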
László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. For further information, see <a href="https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf" rel="noreferrer">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p> Answer 22: <p>The finitistic dimension conjecture for finite-dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite. It is proven only for some very special classes of algebras, and in general there seems to be no reason why this should be true. References: <a href="https://arxiv.org/pdf/1407.2383v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.2383v1.pdf</a> <a href="http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href="https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">Wayback Machine</a>)</p> Answer 23: <p>It seems to me that <a href="https://en.wikipedia.org/wiki/Zeeman_conjecture" rel="nofollow noreferrer">Zeeman's collapsing conjecture</a> satisfies the criteria given.
The Zeeman conjecture implies both the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture" rel="nofollow noreferrer">Poincaré conjecture</a> (proved in 2003) and the <a href="https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture" rel="nofollow noreferrer">Andrews-Curtis conjecture</a>.</p> <p>The following is a quote from Matveev's <a href="https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46" rel="nofollow noreferrer">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p> <blockquote> <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is false. If a counterexample indeed exists, then either it has a “bad” local structure (is not a special polyhedron) or it is a counterexample to either AC or PC.</p> </blockquote> Answer 24: <p>This one is due to Errett Bishop: "all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p> <p>A reference for Bishop's claim is his article <a href="http://www.ams.org/mathscinet-getitem?mr=498014" rel="nofollow noreferrer">"Crisis in contemporary mathematics"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO ("limited principle of omniscience") related to the law of excluded middle. 
The LPO is discussed starting on page 511 of the article.</p> Answer 25: <p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href="https://en.wikipedia.org/wiki/Happy_ending_problem" rel="nofollow noreferrer">Happy Ending Problem</a>), which says that the smallest <span class="math-container">$N$</span> for which any <span class="math-container">$N$</span> points in general position in the plane contain the vertices of some convex <span class="math-container">$n$</span>-gon is <span class="math-container">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. Indeed, only the cases <span class="math-container">$n=3,4,5,6$</span> are known, and while it is known that <span class="math-container">$2^{n-2}+1$</span> is a lower bound for this <span class="math-container">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p> Answer 26: <p><a href="https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D." rel="nofollow noreferrer">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p> Answer 27: <p>A long-standing conjecture in Number Theory is that for each positive integer <span class="math-container">$n$</span> there is no stretch of <span class="math-container">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class="math-container">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. 
</p> <p>But Hensley and Richards (<a href="https://eudml.org/doc/205282" rel="noreferrer">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=0396440" rel="noreferrer">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class="math-container">$k$</span>-tuples conjecture. </p> <p>The current consensus, I believe, is that prime <span class="math-container">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p> Answer 28: <p>W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href="http://cantorsattic.info/Berkeley">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p> <p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a "Refute this!" manner that seems to be in exactly the spirit of your question. </p> <p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p> Answer 29: <h1><a href="https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question"><span class="math-container">$P=NP$</span></a></h1> <p>Let me tick the list:</p> <ol> <li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class="math-container">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p> </li> <li><p>Yes, it's The Open Problem in computational complexity theory</p> </li> <li><p>Yes, it's old</p> </li> <li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. 
Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p> </li> <li><p>It's an equality rather than a negation</p> </li> </ol> Answer 30: <p>I've heard that Roger Heath-Brown has presented the following "conjecture" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p> <p>Let $\pi_2(x)$ denote the number of twin primes less than $x$. Then</p> <blockquote> <blockquote> <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p> </blockquote> </blockquote> <p>where $c$ is the twin prime constant. </p> <p>In other words, the twin prime asymptotic holds with error term is $O(1)$.</p> Answer 31: <p>I propose Edward Nelson's "conjecture" that Peano's arithmetic is inconsistent. </p> <p>First, to be honest, I am not aware that he stated it as "conjecture", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while but a mistake was discovered by Terry Tao and others independently) though he tried a lot. So a conjecture it is, in the sense this word currently has. </p> <p>It is also certainly "outrageous", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p> <p>But let's check that it is also "outrageous" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) it is obviously extremely important. 
5) The negation, that is the assertion that "Peano's arithmetic is consistent" was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p> <p>Let me add that it also satisfies something Gil Kalai added in comment, namely "I prefer examples where [...] the proposer proposes the conjecture genuinely suggesting that it is true".</p> Answer 32: <p>From this <a href="https://mathoverflow.net/a/218926/12978">Math Overflow question</a>, Joel David Hamkins wrote:</p> <blockquote> <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p> </blockquote> <p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p> <p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p> Answer 33: <p><strong><a href="https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle" rel="nofollow noreferrer">Vopěnka's Principle</a></strong></p> <p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href="https://doi.org/10.1017/CBO9780511600579" rel="nofollow noreferrer">Locally presentable and accessible categories.</a>”:</p> <blockquote> <p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka) is that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory. 
When he constructed, in collaboration with Z. Hedrlín and A. Pultr, a rigid graph on every set (see Lemma 2.64), he came to the conclusion that, with some more effort, a large rigid class of graphs must surely be also constructible. He then decided to tease set-theorists: he introduced a new principle (known today as Vopěnka's principle), and proved some consequences concerning large cardinals. He hoped that some set-theorists would continue this line of research (which they did) until somebody showed that the principle is nonsense. However, the latter never materialized - after a number of unsuccessful attempts at constructing a large rigid class of graphs, Vopěnka's principle received its name from Vopěnka's disciples. One of them, T. J. Jech, made Vopěnka's principle widely known. Later the consistency of this principle was derived from the existence of huge cardinals: see [Powell 1972]; our account (in the Appendix) is taken from [Jech 1978]. Thus, today this principle has a firm position in the theory of large cardinals. Petr Vopěnka himself never published anything related to that principle.</p> </blockquote> Answer 34: <p>I don't know about "ridiculous", but there is <a href="https://en.wikipedia.org/wiki/Hall&#39;s_conjecture" rel="noreferrer">Hall's Conjecture</a> in its original form:</p> <blockquote> <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \neq b^3$, one has $$|a^2-b^3|&gt; C \sqrt{|b|}\;.$$</p> </blockquote> Answer 35: <p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p> <blockquote> <p>$e+\pi$ is rational.</p> </blockquote> <p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\pi$, more or less all that is known is the (elementary) fact that either $e+\pi$ or $e\cdot\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p> <p>I'm not sure when it was first made public, but it is very natural and unlikely not to have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it were true.</p> <p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p> Answer 36: <p>Existence of <a href="https://en.wikipedia.org/wiki/Siegel_zero" rel="noreferrer">Siegel zeros</a>.</p> <p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but given the evidence we have it is rather unlikely to hold.</p> <p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p> <p>3) According to the Wikipedia article, this type of zero was considered back in the 1930s, and earlier by Landau, but I don't know whether they explicitly stated the conjecture. GRH was posed <a href="https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29" rel="noreferrer">back in 1884</a> though.</p> <p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions "conspire" to have a certain non-uniform distribution.
I'm no expert, but <a href="https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros">here</a> some uses are listed (see also <a href="https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/" rel="noreferrer">this blog post</a> by Terry Tao).</p> <p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying "yeah, zeros might exist, but not too close to $1$".</p> Answer 37: <p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p> <p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class="math-container">$\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions, e.g., Artin–Mumford's obstruction via torsion in <span class="math-container">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition-of-the-diagonal invariants, which have led to major breakthroughs.</p> <p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p> Answer 38: <p>I was giving a talk several years ago about the conjectured linear independence (over $\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a "Refute this!" statement:</p> <blockquote> <p>If $\zeta(x+iy)=0$, then $y\in\Bbb Q$.</p> </blockquote> <p>(In other words, even though the imaginary parts of the nontrivial zeros of $\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p> <p>This "conjecture" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p> Answer 39: <p>Here is my favorite.</p> <p>A spherical simplex is called rational if all its dihedral angles are rational multiples of <span class="math-container">$\pi.$</span></p> <p><strong>Rational Volume Conjecture:</strong> The volume of every rational spherical simplex is a rational multiple of <span class="math-container">$\pi^2.$</span></p> <p>It is true in all known examples, but is expected to be false.
The conjecture first appeared in the paper "<a href="https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf" rel="noreferrer">Differential characters and geometric invariants</a>" of Cheeger and Simons, p. 73. (Dis)proving this conjecture would lead to interesting results in scissors congruence theory and group homology.</p> Answer 40: <p>For a prime $p$, an infinite group $G$ is a <a href="https://en.wikipedia.org/wiki/Tarski_monster_group">Tarski monster</a> if each of its proper subgroups has order $p$.</p> <p>If I am correctly informed, the Tarski monster was defined to demonstrate our poor understanding of infinite groups: such monsters obviously don't exist, and it should be easy to prove that they don't exist, but we cannot prove it.</p> <p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that "large" means something like $p\geq 11$.</p> Answer 41: <p>I saw this, I believe really striking, example yesterday here on MO: although the question <a href="https://mathoverflow.net/q/7304/41291">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href="https://mathoverflow.net/a/37205/41291">recent post</a> by <a href="https://mathoverflow.net/users/2349">algori</a> suggests that this is still open.</p> <p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\mathbb C\mathbf P^n$ are algebraic</em>.</p> <p>The question is described as an open problem in Okonek-Schneider-Spindler (1980), but I believe it must have been asked much earlier.</p> Answer 42: <p><span class="math-container">$\DeclareMathOperator\Sp{Sp}\DeclareMathOperator\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class="math-container">$\Sp(n)$</span>. Since <span class="math-container">$\Sp(1) = S^3$</span>, <span class="math-container">$\cat(\Sp(1)) = 1$</span>. In the 1960s, P.
Schweitzer proved that <span class="math-container">$\cat(\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class="math-container">$\cat(\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class="math-container">$\cat(\Sp(3)) = 5$</span>, so maybe it's true?</p> Answer 43: <p>If the holomorphic map $f:\mathbb{C}\to\mathbb{C}$ has a fixed point $p$, and the derivative $\lambda := f'(p)$ equals $e^{2\pi i \theta}$ (with irrational $\theta$), one can ask if $f$ is conjugate to $z\mapsto\lambda\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p> <ul> <li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li> <li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li> </ul> <p>Quadratic maps can have Siegel disks, but not for just any $\theta$; the number-theoretical properties of this 'rotation number' are relevant. However, if $\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p> <p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. The <strong>outrageous conjecture</strong> that has been floating around is that:</p> <ul> <li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li> </ul> <p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p> Answer 44: <p><span class="math-container">$S^6$</span> has a complex structure.</p> <p>I don’t know if this applies, but it has a nice story.
It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href="https://arxiv.org/abs/1610.09366" rel="nofollow noreferrer">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun this would be a minor disaster.</p> Answer 45: <p>Let us say a graph class $\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p> <p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs, which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003). </p> <p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p> <p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds, for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\text{C}$.
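To make the definitions concrete, here is a minimal sketch (my own illustration, not from the answer) of a textbook ALS with a trivially polynomial-time decoder, for the small, hereditary class of interval graphs; the vertex names and intervals are made up:

```python
# A toy adjacency labeling scheme for interval graphs. Each vertex's label
# is just its interval's endpoints (normalizable to integers below 2n,
# hence O(log n) bits); the decoder tests whether two intervals intersect
# and never sees the graph itself.

def labels_for(intervals):
    """Assign each vertex its interval (lo, hi) as its label."""
    return dict(intervals)

def adjacent(label_u, label_v):
    """Label decoder: two distinct vertices are adjacent iff their
    intervals intersect."""
    (a, b), (c, d) = label_u, label_v
    return a <= d and c <= b

example = {"u": (1, 4), "v": (3, 6), "w": (5, 8), "x": (9, 9)}
L = labels_for(example)
print(adjacent(L["u"], L["v"]))  # True:  [1,4] and [3,6] overlap
print(adjacent(L["u"], L["w"]))  # False: [1,4] and [5,8] are disjoint
```

The point of the conjecture is that such a cheap, polynomial-time-decodable scheme should exist not just for well-structured classes like this one, but for <em>every</em> small, hereditary class.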
It can be shown that $\text{G1EXP} \subsetneq \text{G2EXP} \subsetneq \text{G3EXP} \dots \subsetneq \text{GR} \subsetneq \text{GALL}$ where $\text{kEXP}$ is the set of languages that can be computed in time $\exp^k(\text{poly}(n))$, $\text{R}$ is the set of all decidable languages and $\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graph or line segment graphs for which it is not even known whether they are in $\text{GALL}$. Additionally, a graph class $\mathcal{C}$ is in $\text{GALL}$ iff $\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \in \mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\mathcal{C}$ on $n$ vertices as induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. </p> <p>An ALS is a tuple $S=(F,c)$ where $F \subseteq \{0,1\}^* \times \{0,1\}^*$ is called label decoder and $c \in \mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\ell \colon V(G) \rightarrow \{0,1\}^{c \lceil \log n \rceil}$ such that for all $u,v \in V(G)$ it holds that $(u,v) \in E(G) \Leftrightarrow (\ell(u),\ell(v)) \in F$. A graph class $\mathcal{C}$ has an ALS $S$ if every graph in $\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\mathcal{C}$). </p> Answer 46: <p>Let me promote a comment of Terry Tao into an answer and mention two conjectures. 
(See, e.g., <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">Thom - Finitary approximations of groups and their applications</a> and <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p> <p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p> <p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear.</p> <p>Conjectures 1 and 2, as well as several other problems, can be asked in the context of approximate homomorphisms of a group <span class="math-container">$\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of the normalized Hamming norm, then this leads to Conjecture 1. If the target groups are <span class="math-container">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert–Schmidt norm, then this leads to Conjecture 2. A counterexample to Conjecture 2 also leads to a counterexample to <a href="https://en.wikipedia.org/wiki/Connes_embedding_problem" rel="nofollow noreferrer">Connes' 1976 embedding conjecture</a>, which was refuted <a href="https://arxiv.org/abs/2001.04383" rel="nofollow noreferrer">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class="math-container">$\mathrm{MIP}^* = \mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjectures 1 and 2 as outrageous conjectures!</p> Answer 47: <p>The so-called <a href="https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture" rel="noreferrer">Lovász conjecture</a> states:</p> <blockquote> <p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p> </blockquote> <p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it.
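For intuition, here is a tiny brute-force check (a hypothetical sketch; the adjacency encoding is my own) on the standard test case of the Petersen graph, which is vertex-transitive and famously has no Hamiltonian cycle, yet, consistent with the conjecture, does contain a Hamiltonian path:

```python
# Vertices 0-4: outer 5-cycle; 5-9: inner pentagram; i <-> i+5 are spokes.
PETERSEN = {
    0: {1, 4, 5}, 1: {0, 2, 6}, 2: {1, 3, 7}, 3: {2, 4, 8}, 4: {0, 3, 9},
    5: {0, 7, 8}, 6: {1, 8, 9}, 7: {2, 5, 9}, 8: {3, 5, 6}, 9: {4, 6, 7},
}

def hamiltonian_path(g):
    """Exhaustive backtracking; returns a Hamiltonian path as a list, or None."""
    n = len(g)
    def extend(path, seen):
        if len(path) == n:
            return path
        for w in g[path[-1]]:
            if w not in seen:
                found = extend(path + [w], seen | {w})
                if found:
                    return found
        return None
    for start in g:
        found = extend([start], {start})
        if found:
            return found
    return None

def hamiltonian_cycle(g):
    """Same search, but the last vertex must be adjacent to the first."""
    n = len(g)
    start = next(iter(g))  # a Hamiltonian cycle may be rooted anywhere
    def extend(path, seen):
        if len(path) == n:
            return path if start in g[path[-1]] else None
        for w in g[path[-1]]:
            if w not in seen:
                found = extend(path + [w], seen | {w})
                if found:
                    return found
        return None
    return extend([start], {start})

print(hamiltonian_path(PETERSEN))   # a path visiting all 10 vertices
print(hamiltonian_cycle(PETERSEN))  # None: Petersen is not Hamiltonian
```

So the Petersen graph is no counterexample, which is exactly the frustrating pattern: known obstructions rule out Hamiltonian cycles but say nothing against Hamiltonian paths.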
László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. For further information, see <a href="https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf" rel="noreferrer">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p> Answer 48: <p>The finitistic dimension conjecture for finite-dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite. It is proven only for some very special classes of algebras, and in general there seems to be no reason why this should be true. References: <a href="https://arxiv.org/pdf/1407.2383v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.2383v1.pdf</a> <a href="http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href="https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">Wayback Machine</a>)</p> Answer 49: <p>It seems to me that <a href="https://en.wikipedia.org/wiki/Zeeman_conjecture" rel="nofollow noreferrer">Zeeman's collapsing conjecture</a> satisfies the criteria given.
The Zeeman conjecture implies both the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture" rel="nofollow noreferrer">Poincaré conjecture</a> (proved in 2003) and the <a href="https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture" rel="nofollow noreferrer">Andrews-Curtis conjecture</a>.</p> <p>The following is a quote from Matveev's <a href="https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46" rel="nofollow noreferrer">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p> <blockquote> <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is false. If a counterexample indeed exists, then either it has a “bad” local structure (is not a special polyhedron) or it is a counterexample to either AC or PC.</p> </blockquote> Answer 50: <p>This one is due to Errett Bishop: "all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p> <p>A reference for Bishop's claim is his article <a href="http://www.ams.org/mathscinet-getitem?mr=498014" rel="nofollow noreferrer">"Crisis in contemporary mathematics"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO ("limited principle of omniscience") related to the law of excluded middle. 
The LPO is discussed starting on page 511 of the article.</p> Answer 51: <p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href="https://en.wikipedia.org/wiki/Happy_ending_problem" rel="nofollow noreferrer">Happy Ending Problem</a>), which says that the smallest <span class="math-container">$N$</span> for which any <span class="math-container">$N$</span> points in general position in the plane contain the vertices of some convex <span class="math-container">$n$</span>-gon is <span class="math-container">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. Indeed, only the cases <span class="math-container">$n=3,4,5,6$</span> are known, and while it is known that <span class="math-container">$2^{n-2}+1$</span> is a lower bound for this <span class="math-container">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p> Answer 52: <p><a href="https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D." rel="nofollow noreferrer">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p> Answer 53: <p>A long-standing conjecture in Number Theory is that for each positive integer <span class="math-container">$n$</span> there is no stretch of <span class="math-container">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class="math-container">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. 
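The conjecture just stated (often referred to as the second Hardy–Littlewood conjecture) is easy to spot-check numerically for small $n$; the following is a quick sketch (the bound and names are my own choices, and of course a finite check is no evidence for large $n$):

```python
# Spot-check: for a few small n, no window of n consecutive integers below
# LIMIT contains more primes than the initial stretch 2 .. n+1.
LIMIT = 10_000

def prime_flags(limit):
    """Sieve of Eratosthenes: flags[k] is True iff k is prime, 0 <= k <= limit."""
    flags = [False, False] + [True] * (limit - 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return flags

flags = prime_flags(LIMIT)

def max_window(n):
    """Largest number of primes in any window [x, x+n) with x+n <= LIMIT."""
    count = sum(flags[0:n])
    best = count
    for x in range(LIMIT - n):          # slide the window one step right
        count += flags[x + n] - flags[x]
        best = max(best, count)
    return best

for n in (10, 50, 100):
    initial = sum(flags[2 : n + 2])     # primes in the stretch 2 .. n+1
    best = max_window(n)
    print(n, initial, best, best <= initial)   # last column holds below LIMIT
```

As the answer goes on to explain, this empirical pattern is precisely what the prime $k$-tuples conjecture predicts must eventually break.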
</p> <p>But Hensley and Richards (<a href="https://eudml.org/doc/205282" rel="noreferrer">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=0396440" rel="noreferrer">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class="math-container">$k$</span>-tuples conjecture. </p> <p>The current consensus, I believe, is that prime <span class="math-container">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p> Answer 54: <p>W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href="http://cantorsattic.info/Berkeley">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p> <p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a "Refute this!" manner that seems to be in exactly the spirit of your question. </p> <p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p> Answer 55: <h1><a href="https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question"><span class="math-container">$P=NP$</span></a></h1> <p>Let me tick the list:</p> <ol> <li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class="math-container">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p> </li> <li><p>Yes, it's The Open Problem in computational complexity theory</p> </li> <li><p>Yes, it's old</p> </li> <li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. 
Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p> </li> <li><p>It's an equality rather than a negation</p> </li> </ol> Answer 56: <p>I've heard that Roger Heath-Brown has presented the following "conjecture" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p> <p>Let $\pi_2(x)$ denote the number of twin primes less than $x$. Then</p> <blockquote> <blockquote> <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p> </blockquote> </blockquote> <p>where $c$ is the twin prime constant. </p> <p>In other words, the twin prime asymptotic holds with error term is $O(1)$.</p> Answer 57: <p>I propose Edward Nelson's "conjecture" that Peano's arithmetic is inconsistent. </p> <p>First, to be honest, I am not aware that he stated it as "conjecture", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while but a mistake was discovered by Terry Tao and others independently) though he tried a lot. So a conjecture it is, in the sense this word currently has. </p> <p>It is also certainly "outrageous", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p> <p>But let's check that it is also "outrageous" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public much before 2006. 4) it is obviously extremely important. 
5) The negation, that is the assertion that "Peano's arithmetic is consistent" was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p> <p>Let me add that it also satisfies something Gil Kalai added in comment, namely "I prefer examples where [...] the proposer proposes the conjecture genuinely suggesting that it is true".</p> Answer 58: <p>From this <a href="https://mathoverflow.net/a/218926/12978">Math Overflow question</a>, Joel David Hamkins wrote:</p> <blockquote> <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p> </blockquote> <p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p> <p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p> Answer 59: <p><strong><a href="https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle" rel="nofollow noreferrer">Vopěnka's Principle</a></strong></p> <p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href="https://doi.org/10.1017/CBO9780511600579" rel="nofollow noreferrer">Locally presentable and accessible categories.</a>”:</p> <blockquote> <p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka) is that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory. 
When he constructed, in collaboration with Z. Hedrlín and A. Pultr, a rigid graph on every set (see Lemma 2.64), he came to the conclusion that, with some more effort, a large rigid class of graphs must surely be also constructible. He then decided to tease set-theorists: he introduced a new principle (known today as Vopěnka's principle), and proved some consequences concerning large cardinals. He hoped that some set-theorists would continue this line of research (which they did) until somebody showed that the principle is nonsense. However, the latter never materialized - after a number of unsuccessful attempts at constructing a large rigid class of graphs, Vopěnka's principle received its name from Vopěnka's disciples. One of them, T. J. Jech, made Vopěnka's principle widely known. Later the consistency of this principle was derived from the existence of huge cardinals: see [Powell 1972]; our account (in the Appendix) is taken from [Jech 1978]. Thus, today this principle has a firm position in the theory of large cardinals. Petr Vopěnka himself never published anything related to that principle.</p> </blockquote> Answer 60: <p>I don't know about "ridiculous", but there is <a href="https://en.wikipedia.org/wiki/Hall&#39;s_conjecture" rel="noreferrer">Hall's Conjecture</a> in its original form:</p> <blockquote> <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \neq b^3$, one has $$|a^2-b^3|&gt; C \sqrt{|b|}\;.$$</p> </blockquote> Answer 61: <p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p> <blockquote> <p>$e+\pi$ is rational.</p> </blockquote> <p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\pi$, more or less all that is known is the (elementary) fact that either $e+\pi$ or $e\cdot\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p> <p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p> <p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p> Answer 62: <p>Existence of <a href="https://en.wikipedia.org/wiki/Siegel_zero" rel="noreferrer">Siegel zeros</a>.</p> <p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p> <p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p> <p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href="https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29" rel="noreferrer">back in 1884</a> though.</p> <p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions "conspire" to have certain non-uniform distribution. 
I'm no expert, but <a href="https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros">here</a> some uses are listed (see also <a href="https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/" rel="noreferrer">this blog post</a> by Terry Tao).</p> <p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying "yeah, zeros might exist, but not too close to $1$".</p> Answer 63: <p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p> <p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class="math-container">$\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions e.g., Artin–Mumford's obstruction via torsion in <span class="math-container">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants which have led to major breakthroughs.</p> <p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p> Answer 64: <p>I was giving a talk several years ago about the conjectured linear independence (over $\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a "Refute this!" statement:</p> <blockquote> <p>If $\zeta(x+iy)=0$, then $y\in\Bbb Q$.</p> </blockquote> <p>(In other words, even though the imaginary parts of the nontrivial zeros of $\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p> <p>This "conjecture" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p> Answer 65: <p>Here is my favorite.</p> <p>Spherical simplex is called rational, if all its dihedral angles are rational multiples of <span class="math-container">$\pi.$</span></p> <p><strong>Rational Volume Conjecture:</strong> Volume of every rational spherical simplex is a rational multiple of <span class="math-container">$\pi^2.$</span></p> <p>It is true in all known examples, but is expected to be false. 
The conjecture first appeared in "<a href="https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf" rel="noreferrer">Differential characters and geometric invariants</a>" paper of Cheeger and Simons, p.73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p> Answer 66: <p>For a prime $p$, an infinite group $G$ is a <a href="https://en.wikipedia.org/wiki/Tarski_monster_group">Tarski monster</a> if each of its proper subgroups has order $p$.</p> <p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups, because such monsters obviously don't exist, it should be easy to prove that they don't exist, but we cannot prove it.</p> <p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that "large" means something like $p\geq 11$.</p> Answer 67: <p>Seen this, I believe really striking, example yesterday here on MO: although the question <a href="https://mathoverflow.net/q/7304/41291">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href="https://mathoverflow.net/a/37205/41291">recent post</a> by <a href="https://mathoverflow.net/users/2349">algori</a> suggests that this is still open.</p> <p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\mathbb C\mathbf P^n$ are algebraic</em>.</p> <p>The question is described as an open problem in Okonek-Schneider-Spindler (1980) but I believe must have been asked much earlier.</p> Answer 68: <p><span class="math-container">$\DeclareMathOperator\Sp{Sp}\DeclareMathOperator\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class="math-container">$\Sp(n)$</span>. Since <span class="math-container">$\Sp(1) = S^3$</span>, <span class="math-container">$\cat(\Sp(1)) = 1$</span>. In the 1960s, P. 
Schweitzer proved that <span class="math-container">$\cat(\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class="math-container">$\cat(\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class="math-container">$\cat(\Sp(3)) = 5$</span>, so maybe it's true?</p> Answer 69: <p>If the holomorphic map $f:\mathbb{C}\to\mathbb{C}$ has a fixed point $p$, and the derivative $\lambda := f'(p)$ equals $e^{2\pi i \theta}$ (with irrational $\theta$), one can ask if $f$ is conjugate to $z\mapsto\lambda\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p> <ul> <li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li> <li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li> </ul> <p>Quadratic maps can have Siegel disks, but not for just any $\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p> <p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. The <strong>outrageous conjecture</strong> that has been floating around is that:</p> <ul> <li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li> </ul> <p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p> Answer 70: <p><span class="math-container">$S^6$</span> has a complex structure.</p> <p>I don’t know if this applies, but it has a nice story.
It has been &quot;published&quot; to be true, and now Atiyah has a short paper <a href="https://arxiv.org/abs/1610.09366" rel="nofollow noreferrer">The non-existent complex 6-sphere</a> on arXiv claiming it to be false; other important mathematicians have also worked on this problem. According to LeBrun, this would be a minor disaster.</p> Answer 71: <p>Let us say a graph class $\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p> <p>Initially, this was posed as a question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs which appeared at STOC '88. It was restated as a conjecture by Spinrad in the book Efficient Graph Representations (2003). </p> <p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p> <p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\text{C}$.
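As a concrete toy illustration of an adjacency labeling scheme (my own sketch, not part of the original answer), here is the standard ALS for forests: each vertex is labeled with the pair (its own id, its parent's id), so $c=2$ in the notation below, and the label decoder just compares the two pairs.

```python
# Toy adjacency labeling scheme for forests (illustrative sketch only).
# Each vertex's label is the pair (own id, parent id); roots point to themselves.
# The decoder checks whether one endpoint is the parent of the other, so it
# runs in time polynomial (in fact linear) in the label length.
def label_forest(parent):
    """parent: dict mapping each vertex to its parent (roots map to themselves)."""
    return {v: (v, p) for v, p in parent.items()}

def decode(label_u, label_v):
    """Label decoder F: True iff the two labeled vertices are adjacent."""
    (u, pu), (v, pv) = label_u, label_v
    return u != v and (pu == v or pv == u)

# A path 0 - 1 - 2 plus an isolated root 3:
labels = label_forest({0: 0, 1: 0, 2: 1, 3: 3})
print(decode(labels[0], labels[1]),   # True: 1's parent is 0
      decode(labels[0], labels[2]),   # False: not adjacent
      decode(labels[2], labels[3]))   # False: different components
```

The point of the conjecture is that something like this modest scheme should exist for <em>every</em> small, hereditary class, which is exactly what seems too good to be true.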
It can be shown that $\text{G1EXP} \subsetneq \text{G2EXP} \subsetneq \text{G3EXP} \dots \subsetneq \text{GR} \subsetneq \text{GALL}$ where $\text{kEXP}$ is the set of languages that can be computed in time $\exp^k(\text{poly}(n))$, $\text{R}$ is the set of all decidable languages and $\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graphs or line segment graphs for which it is not even known whether they are in $\text{GALL}$. Additionally, a graph class $\mathcal{C}$ is in $\text{GALL}$ iff $\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \in \mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\mathcal{C}$ on $n$ vertices as an induced subgraph. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. </p> <p>An ALS is a tuple $S=(F,c)$ where $F \subseteq \{0,1\}^* \times \{0,1\}^*$ is called the label decoder and $c \in \mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\ell \colon V(G) \rightarrow \{0,1\}^{c \lceil \log n \rceil}$ such that for all $u,v \in V(G)$ it holds that $(u,v) \in E(G) \Leftrightarrow (\ell(u),\ell(v)) \in F$. A graph class $\mathcal{C}$ has an ALS $S$ if every graph in $\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\mathcal{C}$). </p> Answer 72: <p>Let me promote a comment of Terry Tao into an answer and mention two conjectures. 
(See, e.g., <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">Thom - Finitary approximations of groups and their applications</a> and <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p> <p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p> <p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear.</p> <p>Conjectures 1 and 2, as well as several other problems, can be asked in the context of approximate homomorphisms of a group <span class="math-container">$\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of the normalized Hamming norm, then this leads to Conjecture 1. If the target groups are <span class="math-container">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert–Schmidt norm, then this leads to Conjecture 2. A counterexample to Conjecture 2 also leads to a counterexample to <a href="https://en.wikipedia.org/wiki/Connes_embedding_problem" rel="nofollow noreferrer">Connes' 1976 embedding conjecture</a>, which was refuted <a href="https://arxiv.org/abs/2001.04383" rel="nofollow noreferrer">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class="math-container">$\mathrm{MIP}^* = \mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjectures 1 and 2 as outrageous conjectures!</p> Answer 73: <p>The so-called <a href="https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture" rel="noreferrer">Lovász conjecture</a> states:</p> <blockquote> <p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p> </blockquote> <p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it. 
László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. For further information, see <a href="https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf" rel="noreferrer">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p> Answer 74: <p>The finitistic dimension conjecture for finite dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite. It has been proved only for some very special classes of algebras, and in general there seems to be no reason why it should be true. References: <a href="https://arxiv.org/pdf/1407.2383v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.2383v1.pdf</a> <a href="http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href="https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">Wayback Machine</a>)</p> Answer 75: <p>It seems to me that <a href="https://en.wikipedia.org/wiki/Zeeman_conjecture" rel="nofollow noreferrer">Zeeman's collapsing conjecture</a> satisfies the criteria given. 
The Zeeman conjecture implies both the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture" rel="nofollow noreferrer">Poincaré conjecture</a> (proved in 2003) and the <a href="https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture" rel="nofollow noreferrer">Andrews-Curtis conjecture</a>.</p> <p>The following is a quote from Matveev's <a href="https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46" rel="nofollow noreferrer">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p> <blockquote> <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is false. If a counterexample indeed exists, then either it has a “bad” local structure (is not a special polyhedron) or it is a counterexample to either AC or PC.</p> </blockquote> Answer 76: <p>This one is due to Errett Bishop: "all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p> <p>A reference for Bishop's claim is his article <a href="http://www.ams.org/mathscinet-getitem?mr=498014" rel="nofollow noreferrer">"Crisis in contemporary mathematics"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO ("limited principle of omniscience") related to the law of excluded middle. 
The LPO is discussed starting on page 511 of the article.</p> Answer 77: <p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href="https://en.wikipedia.org/wiki/Happy_ending_problem" rel="nofollow noreferrer">Happy Ending Problem</a>), which says that the smallest <span class="math-container">$N$</span> for which any <span class="math-container">$N$</span> points in general position in the plane contain the vertices of some convex <span class="math-container">$n$</span>-gon is <span class="math-container">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. Indeed, only the cases <span class="math-container">$n=3,4,5,6$</span> are known, and while it is known that <span class="math-container">$2^{n-2}+1$</span> is a lower bound for this <span class="math-container">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p> Answer 78: <p><a href="https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D." rel="nofollow noreferrer">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p> Answer 79: <p>A long-standing conjecture in Number Theory is that for each positive integer <span class="math-container">$n$</span> there is no stretch of <span class="math-container">$n$</span> consecutive integers containing more primes than the stretch from 2 to <span class="math-container">$n+1$</span>. Just looking at a table of primes and seeing how they thin out is enough to make the conjecture plausible. 
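To make that plausibility concrete, here is a quick brute-force check (my own illustration, not part of the original answer): for a small $n$, slide a window of $n$ consecutive integers up to a modest bound and see whether any window catches more primes than the initial stretch $\{2, \dots, n+1\}$.

```python
# Illustrative sketch: compare the prime count of the initial stretch
# {2, ..., n+1} against every window of n consecutive integers up to `bound`.
def sieve(limit):
    """Boolean primality table for 0..limit (sieve of Eratosthenes)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return is_prime

def max_window_count(n, bound):
    """Maximum number of primes in any n consecutive integers within [1, bound]."""
    is_prime = sieve(bound)
    count = sum(is_prime[1:n + 1])          # window 1..n
    best = count
    for start in range(2, bound - n + 2):   # slide one step: drop start-1, gain start+n-1
        count += is_prime[start + n - 1] - is_prime[start - 1]
        best = max(best, count)
    return best

n = 10
initial = sum(sieve(n + 1)[2:n + 2])        # primes in {2, ..., 11}: 2, 3, 5, 7, 11
print(initial, max_window_count(n, 10_000))  # the initial stretch is not beaten here
```

Of course, a search like this only shows there is no counterexample at a small scale; as the next paragraph of the answer explains, the conjecture is nonetheless expected to fail eventually.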
</p> <p>But Hensley and Richards (<a href="https://eudml.org/doc/205282" rel="noreferrer">Primes in intervals</a>, Acta Arith 25 (1973/74) 375-391, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=0396440" rel="noreferrer">MR0396440</a>) proved that this conjecture is incompatible with an equally long-standing conjecture, the prime <span class="math-container">$k$</span>-tuples conjecture. </p> <p>The current consensus, I believe, is that prime <span class="math-container">$k$</span>-tuples is true, while the first conjecture is false (but not proved to be false). </p> Answer 80: <p>W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the <a href="http://cantorsattic.info/Berkeley">Berkeley cardinals</a>, and challenged the seminar audience to refute their existence. </p> <p>He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a "Refute this!" manner that seems to be in exactly the spirit of your question. </p> <p>Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals. </p> Answer 81: <h1><a href="https://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question"><span class="math-container">$P=NP$</span></a></h1> <p>Let me tick the list:</p> <ol> <li><p>Most likely false, because, as Scott Aaronson said <em>&quot;If <span class="math-container">$P = NP$</span>, then the world would be a profoundly different place than we usually assume it to be.&quot;</em></p> </li> <li><p>Yes, it's The Open Problem in computational complexity theory</p> </li> <li><p>Yes, it's old</p> </li> <li><p>It's important, again quoting Scott: <em>&quot;[because if it were true], there would be no special value in &quot;creative leaps,&quot; no fundamental gap between solving a problem and recognizing the solution once it's found. 
Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...&quot;</em></p> </li> <li><p>It's an equality rather than a negation</p> </li> </ol> Answer 82: <p>I've heard that Roger Heath-Brown has presented the following "conjecture" at several conferences, most likely to illustrate our poor understanding of the topic more than because he actually believes it to be true.</p> <p>Let $\pi_2(x)$ denote the number of twin primes less than $x$. Then</p> <blockquote> <p>$\pi_2(x) = c \int_{0}^{x}\frac{dt}{\log^2 t} + O(1)$</p> </blockquote> <p>where $c$ is the twin prime constant. </p> <p>In other words, the twin prime asymptotic holds with error term $O(1)$.</p> Answer 83: <p>I propose Edward Nelson's "conjecture" that Peano's arithmetic is inconsistent. </p> <p>First, to be honest, I am not aware that he stated it as a "conjecture", using that word, but this is something he said he believed to be true, and that he wasn't able to prove (except for a little while, until a mistake was discovered by Terry Tao and others independently) though he tried a lot. So a conjecture it is, in the sense this word currently has. </p> <p>It is also certainly "outrageous", in the usual sense of the word -- to many mathematicians, according to my own experience, the simple mention of it provokes disbelief, sarcasm, sometimes outright hostility. </p> <p>But let's check that it is also "outrageous" in the sense of this question. 1) It is most certainly false, or at least this is what most mathematicians, including myself, think. 2) But it is certainly not known to be false -- not now, not ever. 3) Nelson made his program public well before 2006. 4) It is obviously extremely important. 
5) The negation, that is the assertion that "Peano's arithmetic is consistent" was once a conjecture by Hilbert, but since Gödel it cannot be called a conjecture anymore, since we know it cannot be proven (in a system etc.)</p> <p>Let me add that it also satisfies something Gil Kalai added in comment, namely "I prefer examples where [...] the proposer proposes the conjecture genuinely suggesting that it is true".</p> Answer 84: <p>From this <a href="https://mathoverflow.net/a/218926/12978">Math Overflow question</a>, Joel David Hamkins wrote:</p> <blockquote> <p>I once heard Harvey Friedman suggest that the set of prime-differences, that is, the set of all natural numbers $n$ for which there are primes $p,q$ with $p-q=n$, as a possible candidate for all we knew for an intermediate Turing degree &mdash; a noncomputable set between $0$ and $0'$ &mdash; that was <em>natural</em>, not specifically constructed to have that feature.</p> </blockquote> <p>I've also heard others (albeit more recently than 2006) conjecture that Hilbert's 10th problem for rationals is an intermediate degree. </p> <p>Really, any conjecture that there is a <em>natural</em> intermediate degree is outrageous (although not exactly formal enough to refute). </p> Answer 85: <p><strong><a href="https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle" rel="nofollow noreferrer">Vopěnka's Principle</a></strong></p> <p>It fits here perfectly except that it has never been called a conjecture. Vopěnka himself was convinced it was wrong! But I will better just post a section from page 279 of Adámek and Rosický “<a href="https://doi.org/10.1017/CBO9780511600579" rel="nofollow noreferrer">Locally presentable and accessible categories.</a>”:</p> <blockquote> <p>The story of Vopěnka's principle (as related to the authors by Petr Vopěnka) is that of a practical joke which misfired: In the 1960's P. Vopěnka was repelled by the multitude of large cardinals which emerged in set theory. 
When he constructed, in collaboration with Z. Hedrlín and A. Pultr, a rigid graph on every set (see Lemma 2.64), he came to the conclusion that, with some more effort, a large rigid class of graphs must surely be also constructible. He then decided to tease set-theorists: he introduced a new principle (known today as Vopěnka's principle), and proved some consequences concerning large cardinals. He hoped that some set-theorists would continue this line of research (which they did) until somebody showed that the principle is nonsense. However, the latter never materialized - after a number of unsuccessful attempts at constructing a large rigid class of graphs, Vopěnka's principle received its name from Vopěnka's disciples. One of them, T. J. Jech, made Vopěnka's principle widely known. Later the consistency of this principle was derived from the existence of huge cardinals: see [Powell 1972]; our account (in the Appendix) is taken from [Jech 1978]. Thus, today this principle has a firm position in the theory of large cardinals. Petr Vopěnka himself never published anything related to that principle.</p> </blockquote> Answer 86: <p>I don't know about "ridiculous", but there is <a href="https://en.wikipedia.org/wiki/Hall&#39;s_conjecture" rel="noreferrer">Hall's Conjecture</a> in its original form:</p> <blockquote> <p>There is a positive constant $C$ such that for any two integers $a$ and $b$ with $a^2 \neq b^3$, one has $$|a^2-b^3|&gt; C \sqrt{|b|}\;.$$</p> </blockquote> Answer 87: <p>I'm surprised no one has mentioned it, but the first one that comes to my mind is this.</p> <blockquote> <p>$e+\pi$ is rational.</p> </blockquote> <p>I think most mathematicians would agree that it is ridiculous. 
It would follow from Schanuel's conjecture that it is false, but as far as I know, the conjecture is wide open, and when it comes to (ir)rationality of $e+\pi$, more or less all that is known is the (elementary) fact that either $e+\pi$ or $e\cdot\pi$ is transcendental (naturally, we expect <em>both</em> of them to be transcendental, so it doesn't really get us any closer to a proof).</p> <p>I'm not sure when it was made publicly, but it is very natural and unlikely to not have been considered before (I heard about it as an undergrad around 2010). I think it is quite important in that it is an obvious test case for Schanuel's conjecture, and in that it would certainly be quite shocking if it was true.</p> <p>(Caveat: I am not a specialist, so if someone more competent can contradict me, please do!)</p> Answer 88: <p>Existence of <a href="https://en.wikipedia.org/wiki/Siegel_zero" rel="noreferrer">Siegel zeros</a>.</p> <p>1) If we are to believe (like most mathematicians do) in the generalized Riemann hypothesis, this is completely false. I wouldn't necessarily call this ridiculous or outrageous, but within the evidence we have it is rather unlikely to hold.</p> <p>2) Nonexistence of Siegel zeros is a problem wide, wide open, nowhere near close to being resolved.</p> <p>3) According to the Wikipedia article, this type of zeros was considered back in 1930s, and earlier by Landau, but I don't know if they have explicitly stated the conjecture. GRH was posed <a href="https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Generalized_Riemann_hypothesis_.28GRH.29" rel="noreferrer">back in 1884</a> though.</p> <p>4) They are immensely useful in many applications, since if they exist, primes in certain arithmetic progressions "conspire" to have certain non-uniform distribution. 
I'm no expert, but <a href="https://mathoverflow.net/questions/125276/good-uses-of-siegel-zeros">here</a> some uses are listed (see also <a href="https://terrytao.wordpress.com/2015/08/26/heath-browns-theorem-on-prime-twins-and-siegel-zeroes/" rel="noreferrer">this blog post</a> by Terry Tao).</p> <p>5) It <em>implies</em> the negation of GRH, but the negation of a statement itself is quite an awkward statement, saying "yeah, zeros might exist, but not too close to $1$".</p> Answer 89: <p>The &quot;conjecture&quot; in algebraic geometry that all rationally connected varieties are unirational comes to mind. It's usually thrown around as a way of saying, &quot;See, we know so little about what varieties can be unirational that we can't prove a single rationally connected variety isn't.&quot; Unirationality implies rational connectedness, but I think almost everyone believes the converse should be false.</p> <p>Some background: Algebraic geometers have been interested for a long time in proving that certain varieties are or are not rational (very roughly, figuring out which systems of polynomial equations can have their solutions parametrized). Clemens and Griffiths showed in 1972 that a cubic hypersurface in <span class="math-container">$\mathbb{P}^4$</span> is irrational. 
Since then, there's been a lot of progress in rationality obstructions e.g., Artin–Mumford's obstruction via torsion in <span class="math-container">$H^3$</span>, Iskovskikh–Manin on quartic threefolds, Kollár's work on rationality of hypersurfaces, and most recently, Voisin's new decomposition of the diagonal invariants which have led to major breakthroughs.</p> <p>On the other hand, unirationality has proved a far harder notion to control, and to my mind the biggest open question in this area is to find any obstruction to unirationality.</p> Answer 90: <p>I was giving a talk several years ago about the conjectured linear independence (over $\Bbb Q$) of the ordinates of the zeros of the Riemann zeta function, and Lior Silberman crystallized our current lack of knowledge into a "Refute this!" statement:</p> <blockquote> <p>If $\zeta(x+iy)=0$, then $y\in\Bbb Q$.</p> </blockquote> <p>(In other words, even though the imaginary parts of the nontrivial zeros of $\zeta(s)$ are believed to be transcendental, algebraically independent, and generally unrelated to any other constants we've ever seen ... we currently can't even prove that a single one of those imaginary parts is irrational!)</p> <p>This "conjecture" can be extended to Dirichlet $L$-functions, and perhaps even further (though one needs to be careful that we don't allow a class of $L$-functions that includes arbitrary vertical shifts of some of its members).</p> Answer 91: <p>Here is my favorite.</p> <p>Spherical simplex is called rational, if all its dihedral angles are rational multiples of <span class="math-container">$\pi.$</span></p> <p><strong>Rational Volume Conjecture:</strong> Volume of every rational spherical simplex is a rational multiple of <span class="math-container">$\pi^2.$</span></p> <p>It is true in all known examples, but is expected to be false. 
The conjecture first appeared in "<a href="https://pdfs.semanticscholar.org/879a/91356bcaac5e3605dbc18778a869d09cb96a.pdf" rel="noreferrer">Differential characters and geometric invariants</a>" paper of Cheeger and Simons, p.73. (Dis)proving this conjecture would lead to interesting results in scissor congruence theory and group homology.</p> Answer 92: <p>For a prime $p$, an infinite group $G$ is a <a href="https://en.wikipedia.org/wiki/Tarski_monster_group">Tarski monster</a> if each of its proper subgroups has order $p$.</p> <p>If I am correctly informed, then the Tarski monster was defined to demonstrate our poor understanding of infinite groups, because such monsters obviously don't exist, it should be easy to prove that they don't exist, but we cannot prove it.</p> <p>Then Olshanskii proved that Tarski monsters do exist for all large primes, and by now many people believe that "large" means something like $p\geq 11$.</p> Answer 93: <p>Seen this, I believe really striking, example yesterday here on MO: although the question <a href="https://mathoverflow.net/q/7304/41291">Complex vector bundles that are not holomorphic</a> is from 2009, a <a href="https://mathoverflow.net/a/37205/41291">recent post</a> by <a href="https://mathoverflow.net/users/2349">algori</a> suggests that this is still open.</p> <p>And the really ridiculous conjecture is that <em>all topological vector bundles on $\mathbb C\mathbf P^n$ are algebraic</em>.</p> <p>The question is described as an open problem in Okonek-Schneider-Spindler (1980) but I believe must have been asked much earlier.</p> Answer 94: <p><span class="math-container">$\DeclareMathOperator\Sp{Sp}\DeclareMathOperator\cat{cat}$</span>The Lusternik–Schnirelmann category of the Lie groups <span class="math-container">$\Sp(n)$</span>. Since <span class="math-container">$\Sp(1) = S^3$</span>, <span class="math-container">$\cat(\Sp(1)) = 1$</span>. In the 1960s, P. 
Schweitzer proved that <span class="math-container">$\cat(\Sp(2)) = 3$</span>. <em>Based on this</em>, a folklore conjecture emerged that in general <span class="math-container">$\cat(\Sp(n)) = 2n-1$</span>. In 2001, it was proved that <span class="math-container">$\cat(\Sp(3)) = 5$</span>, so maybe it's true?</p> Answer 95: <p>If the holomorphic map $f:\mathbb{C}\to\mathbb{C}$ has a fixed point $p$, and the derivative $\lambda := f'(p)$ equals $e^{2\pi i \theta}$ (with irrational $\theta$), one can ask if $f$ is conjugate to $z\mapsto\lambda\cdot z$ in a neighborhood of $p$. If it exists, the largest domain of conjugacy is called a 'Siegel disk'. Two properties to keep in mind are:</p> <ul> <li>A Siegel disk cannot contain a critical point in its interior (boundary is ok).</li> <li>The boundary of a Siegel disk belongs to the Julia set of $f$.</li> </ul> <p>Quadratic maps can have Siegel disks, but not for just any $\theta$; the number theoretical properties of this 'rotation number' are relevant. However, if $\theta$ is Diophantine, the boundary of the disk is well behaved (Jordan curve, quasi-circle...)</p> <p>... But the boundary of a Siegel disk can also be wild; for instance, it can be non-locally connected. The <strong>outrageous conjecture</strong> that has been floating around is that:</p> <ul> <li>There is a quadratic polynomial with a Siegel disk whose boundary equals the Julia set.</li> </ul> <p>Since a quadratic Julia set is symmetric with respect to the critical point, a quadratic Siegel disk would have a symmetric preimage whose boundary <em>also</em> equals the Julia set, but the unbounded component of the complement (Fatou set) also has boundary equal to the Julia set, so our conjectured Siegel disk would form part of a 'lakes of Wada' configuration.</p> Answer 96: <p><span class="math-container">$S^6$</span> has a complex structure.</p> <p>I don’t know if this apply, but this has a nice story. 
It has been &quot;published&quot; to to be true and now Atiyah has a short paper <a href="https://arxiv.org/abs/1610.09366" rel="nofollow noreferrer">The non-existent complex 6-sphere</a> on arXiv claiming to be false, other important mathematicians has also work on this problem. According to LeBrun this would be a minor disaster.</p> Answer 97: <p>Let us say a graph class $\mathcal{C}$ is small if it has at most $n^{O(n)}$ graphs on $n$ vertices. The <strong>implicit graph conjecture</strong> states that every small, hereditary graph class has an adjacency labeling scheme (ALS) with a label decoder that can be computed in polynomial time (a formal definition of ALS is given at the end of the answer).</p> <p>Initially, this was posed as question by Kannan, Naor and Rudich in their paper Implicit Representation of Graphs which appeared at STOC '88. It was restated as conjecture by Spinrad in the book Efficient Graph Representations (2003). </p> <p>It is important because it would imply the existence of space-efficient representations for all small, hereditary graph classes where querying an edge requires only polylogarithmic time with respect to the number of vertices of the graph. </p> <p>As far as I know there is no consensus about whether this conjecture should be true or not. However, from my perspective it would be an immense surprise if it holds for the following reason. The concept of adjacency labeling schemes can be defined with respect to arbitrary complexity classes. For a complexity class $\text{C}$ (more formally, a set of languages over the binary alphabet) we can define the class of graph classes $\text{GC}$ as the set of all graph classes that have an ALS with a label decoder that can be computed in $\text{C}$. 
It can be shown that $\text{G1EXP} \subsetneq \text{G2EXP} \subsetneq \text{G3EXP} \dots \subsetneq \text{GR} \subsetneq \text{GALL}$ where $\text{kEXP}$ is the set of languages that can be computed in time $\exp^k(\text{poly}(n))$, $\text{R}$ is the set of all decidable languages and $\text{ALL}$ is the set of all languages. I find it hard to believe that <strong>every</strong> small, hereditary graph class falls down through all these classes and just happens to sit in $\text{GP}$ (the choice just seems too arbitrary and weak). In fact, there are natural graph classes such as disk graphs or line segment graphs for which it is not even known whether they are in $\text{GALL}$. Additionally, a graph class $\mathcal{C}$ is in $\text{GALL}$ iff $\mathcal{C}$ has a polynomial universal graph, i.e. a family of graphs $(G_n)_{n \in \mathbb{N}}$ such that $|V(G_n)|$ is polynomially bounded and $G_n$ contains all graphs from $\mathcal{C}$ on $n$ vertices as induced subgraphs. It already seems doubtful to me that every small, hereditary graph class has such polynomial universal graphs. </p> <p>An ALS is a tuple $S=(F,c)$ where $F \subseteq \{0,1\}^* \times \{0,1\}^*$ is called the label decoder and $c \in \mathbb{N}$ is the label length. A graph $G$ with $n$ vertices is represented by $S$ if there exists a labeling $\ell \colon V(G) \rightarrow \{0,1\}^{c \lceil \log n \rceil}$ such that for all $u,v \in V(G)$ it holds that $(u,v) \in E(G) \Leftrightarrow (\ell(u),\ell(v)) \in F$. A graph class $\mathcal{C}$ has an ALS $S$ if every graph in $\mathcal{C}$ can be represented by $S$ (but not necessarily every graph represented by $S$ must be in $\mathcal{C}$). </p> Answer 98: <p>Let me promote a comment of Terry Tao into an answer and mention two conjectures. 
(See, e.g., <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">Thom - Finitary approximations of groups and their applications</a> and <a href="https://arxiv.org/abs/1711.10238" rel="nofollow noreferrer">De Chiffre, Glebsky, Lubotzky, Thom - Stability, cohomology vanishing, and non-approximable groups</a> for more information.)</p> <p><strong>Conjecture 1:</strong> (a question by Gromov): Every group is sofic.</p> <p><strong>Conjecture 2:</strong> (a question by Connes): Every group is hyperlinear.</p> <p>Conjectures 1 and 2, as well as several other problems, can be asked in the context of approximate homomorphisms of a group <span class="math-container">$\Gamma$</span>. If the target groups are symmetric groups, and the approximation is in terms of the normalized Hamming norm, then this leads to Conjecture 1. If the target groups are <span class="math-container">$U(n)$</span> and the notion of approximation is in terms of the normalized Hilbert-Schmidt norm, then this leads to Conjecture 2. A counterexample to Conjecture 2 would also give a counterexample to <a href="https://en.wikipedia.org/wiki/Connes_embedding_problem" rel="nofollow noreferrer">Connes' 1976 embedding conjecture</a>, which was refuted <a href="https://arxiv.org/abs/2001.04383" rel="nofollow noreferrer">a few days ago by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen in their paper <span class="math-container">$\mathrm{MIP}^* = \mathrm{RE}$</span></a>, which gives us additional reasons to consider Conjectures 1 and 2 as outrageous conjectures!</p> Answer 99: <p>The so-called <a href="https://en.wikipedia.org/wiki/Lov%C3%A1sz_conjecture" rel="noreferrer">Lovász conjecture</a> states:</p> <blockquote> <p>Every finite connected vertex-transitive graph contains a Hamiltonian path.</p> </blockquote> <p>One reason I think that this conjecture fits what you're asking for is that, even though it is standardly called a conjecture, it is not clear how many people believe it. 
László Lovász himself originally phrased the question negatively, inviting the reader to find a counterexample. László Babai has vocally argued that there is no reason to believe it is true, and that the value of the question lies in how it highlights our poor understanding of obstructions to Hamiltonicity. For further information, see <a href="https://www.math.ucla.edu/~pak/papers/hamcayley9.pdf" rel="noreferrer">Hamiltonian paths in Cayley graphs</a> by Pak and Radoičić.</p> Answer 100: <p>The finitistic dimension conjecture for finite dimensional algebras states that the supremum of all projective dimensions of modules having finite projective dimension is finite. It is proven only for some very special classes of algebras, and in general there seems to be no reason why this should be true. References: <a href="https://arxiv.org/pdf/1407.2383v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.2383v1.pdf</a> <a href="http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">http://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf</a> (<a href="https://web.archive.org/web/20210506191407/https://www.math.uni-bonn.de/people/schroer/fd-problems-files/FD-FinitisticDimConj.pdf" rel="nofollow noreferrer">Wayback Machine</a>)</p> Answer 101: <p>It seems to me that <a href="https://en.wikipedia.org/wiki/Zeeman_conjecture" rel="nofollow noreferrer">Zeeman's collapsing conjecture</a> satisfies the criteria given. 
The Zeeman conjecture implies both the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture" rel="nofollow noreferrer">Poincaré conjecture</a> (proved in 2003) and the <a href="https://en.wikipedia.org/wiki/Andrews%E2%80%93Curtis_conjecture" rel="nofollow noreferrer">Andrews-Curtis conjecture</a>.</p> <p>The following is a quote from Matveev's <a href="https://books.google.com/books?id=vFLgAyeVSqAC&amp;pg=PA46" rel="nofollow noreferrer">book</a>, where it is proved that ZC restricted to special polyhedra is equivalent to the union of PC and AC.</p> <blockquote> <p>Theorem 1.3.58 may cast a doubt on the widespread belief that ZC is false. If a counterexample indeed exists, then either it has a “bad” local structure (is not a special polyhedron) or it is a counterexample to either AC or PC.</p> </blockquote> Answer 102: <p>This one is due to Errett Bishop: "all meaningful mathematics is reducible to finite calculations with strings of $0$s and $1$s" (imho Bishop formulated this not as a conjecture but as an article of faith but that doesn't necessarily affect the truth or falsity thereof).</p> <p>A reference for Bishop's claim is his article <a href="http://www.ams.org/mathscinet-getitem?mr=498014" rel="nofollow noreferrer">"Crisis in contemporary mathematics"</a> (the link is to the mathscinet review of the article) which discusses the constructivist opposition to a principle called LPO ("limited principle of omniscience") related to the law of excluded middle. 
The LPO is discussed starting on page 511 of the article.</p> Answer 103: <p>I think the Erdős-Szekeres conjecture (see the Wikipedia article on the <a href="https://en.wikipedia.org/wiki/Happy_ending_problem" rel="nofollow noreferrer">Happy Ending Problem</a>), which says that the smallest <span class="math-container">$N$</span> for which any <span class="math-container">$N$</span> points in general position in the plane contain the vertices of some convex <span class="math-container">$n$</span>-gon is <span class="math-container">$N=2^{n-2}+1$</span>, is a little bit ridiculous in that there is not a lot of reason to believe it. Indeed, only the cases <span class="math-container">$n=3,4,5,6$</span> are known, and while it is known that <span class="math-container">$2^{n-2}+1$</span> is a lower bound for this <span class="math-container">$N$</span>, I really am not aware of any other substantial evidence in favor of this conjecture other than that it's a very nice pattern.</p> Answer 104: <p><a href="https://en.wikipedia.org/wiki/K%C3%B6the_conjecture#:%7E:text=In%20mathematics%2C%20the%20K%C3%B6the%20conjecture,%2C%20other%20than%20%7B0%7D." rel="nofollow noreferrer">Köthe's conjecture</a> is a hugely important problem in ring theory, it was posed in 1930, it is still wide open, and it is also widely believed to be false.</p>
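<p>The small cases of the Erdős-Szekeres conjecture mentioned above are finite and can be checked by brute force. The sketch below is editorial (the helper functions are mine, not from the answer): it verifies Klein's original <span class="math-container">$n=4$</span> instance on a sample configuration, namely that 5 points in general position contain the vertices of a convex quadrilateral but need not all lie in convex position themselves.</p>

```python
from itertools import combinations

def cross(o, a, b):
    # Signed area of triangle o-a-b: >0 for a left turn, <0 for a right turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_position(pts):
    # Points are in convex position iff no point lies strictly inside
    # the triangle spanned by any three of the others.
    for p in pts:
        others = [q for q in pts if q != p]
        for tri in combinations(others, 3):
            signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
            if all(s > 0 for s in signs) or all(s < 0 for s in signs):
                return False  # p is strictly inside a triangle
    return True

def largest_convex_subset(pts):
    # Brute force: size of the largest subset in convex position.
    for k in range(len(pts), 2, -1):
        if any(in_convex_position(list(sub)) for sub in combinations(pts, k)):
            return k
    return 2

# Klein's case n=4: any 5 points in general position contain a convex 4-gon.
points = [(0, 0), (4, 0), (2, 1), (1, 3), (3, 4)]  # general position
assert largest_convex_subset(points) >= 4
```

<p>The same search, run over all point sets on a small grid, is how the known cases <span class="math-container">$n=3,4,5$</span> can be verified computationally; the <span class="math-container">$n=6$</span> case (Szekeres-Peters) required a far more sophisticated computer-assisted proof.</p>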
When the automorphism group of an object determines the object
259,732
75
https://mathoverflow.net/questions/259732/when-the-automorphism-group-of-an-object-determines-the-object
<p>Let me start with three examples to illustrate my question (probably vague; I apologize in advance).</p> <ol> <li>$\mathbf{Man}$, the category of closed (compact without boundary) topological manifolds. For any $M, N\in \mathbf{Man}$ there is the following theorem (<a href="http://dl.dropboxusercontent.com/u/5188175/whit.pdf">Whittaker</a>) which says that </li> </ol> <blockquote> <p>$$\mathrm{Homeo}(M)\cong_{\text{as groups}} \mathrm{Homeo}(N) \, \textit{ if and only if } \, M\cong_{\text{as manifold}} N$$</p> </blockquote> <ol start="2"> <li>$\mathbf{NFields}$, the category of number fields. For any $F, K\in \mathbf{NFields}$ there is a theorem (<a href="https://en.wikipedia.org/wiki/Neukirch%E2%80%93Uchida_theorem">Neukirch-Uchida</a>) which says that </li> </ol> <blockquote> <p>$$\mathrm{Gal}(\overline{K}/K)\cong_{\text{as progroups}} \mathrm{Gal}(\overline{F}/F) \textit{ if and only if } K\cong_{\text{as fields}} F $$</p> </blockquote> <ol start="3"> <li>$\mathbf{Vect}$, the category of finite dimensional vector spaces. For any $V, W\in \mathbf{Vect}$ we have that </li> </ol> <blockquote> <p>$$\mathrm{GL}(V)\cong \mathrm{GL}(W) \textit{ if and only if } V\cong W $$</p> </blockquote> <p>In the first and the third examples we see that the automorphism group of an object determines the object. The second example seems to be similar in some sense; however, it does not admit the same naive interpretation.</p> <blockquote> <p>$\textbf{Question:}$ What are other non-trivial examples of interesting categories where the automorphism group of an object determines the object itself? Is there a name for such categories? Is there a way to compare and characterize these kinds of categories? </p> </blockquote>
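<p>A small concrete check of the third example (this sketch is editorial and not part of the original post): over a fixed finite field <span class="math-container">$\mathbb{F}_q$</span>, already the <em>order</em> of the general linear group determines the dimension, since <span class="math-container">$|\mathrm{GL}_n(\mathbb{F}_q)| = \prod_{i=0}^{n-1}(q^n - q^i)$</span> is strictly increasing in <span class="math-container">$n$</span>.</p>

```python
def gl_order(n, q):
    # |GL_n(F_q)| = (q^n - 1)(q^n - q) ... (q^n - q^(n-1)):
    # count ordered bases of F_q^n.
    order = 1
    for i in range(n):
        order *= q**n - q**i
    return order

# For fixed q the orders are strictly increasing and pairwise distinct,
# so |GL(V)| = |GL(W)| already forces dim V = dim W over F_q.
for q in (2, 3, 5):
    orders = [gl_order(n, q) for n in range(1, 7)]
    assert orders == sorted(orders) and len(set(orders)) == len(orders)
```

<p>Of course the theorem for abstract vector spaces is about the group structure of <span class="math-container">$\mathrm{GL}$</span>, not merely its cardinality; the computation is only a sanity check in the finite case.</p>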
[ { "answer_id": 259742, "score": 18, "text": "<p>The following result holds.</p>\n<blockquote>\n<p><strong>Theorem.</strong></p>\n<p><strong>(1)</strong> <span class=\"math-container\">$\\,$</span> (Baer-Kaplanski) <span class=\"math-container\">$\\,$</span> If <span class=\"math-container\">$G$</span> and <span class=\"math-container\">$H$</span> are torsion groups with isomorphic endomorphism rings <span class=\"math-container\">$\\mathrm{End}(G)$</span> and <span class=\"math-container\">$\\mathrm{End}(H)$</span>, then <span class=\"math-container\">$G$</span> and <span class=\"math-container\">$H$</span> are isomorphic, and any ring isomorphism <span class=\"math-container\">$\\psi \\colon \\mathrm{End}(G) \\to \\mathrm{End}(H)$</span> is induced by some group isomorphism <span class=\"math-container\">$\\varphi \\colon G \\to H$</span>.</p>\n<p><strong>(2)</strong> <span class=\"math-container\">$\\,$</span> (Leptin-Liebert) If <span class=\"math-container\">$G$</span>, <span class=\"math-container\">$H$</span> are abelian <span class=\"math-container\">$p$</span>-groups <span class=\"math-container\">$(p &gt;3)$</span> and <span class=\"math-container\">$\\mathrm{Aut}(G)$</span> is isomorphic to <span class=\"math-container\">$\\mathrm{Aut}(H)$</span>, then <span class=\"math-container\">$G$</span> is isomorphic to <span class=\"math-container\">$H$</span>.</p>\n</blockquote>\n<p>See A. V. Mikhalev, G. Pilz: <a href=\"https://books.google.it/books?id=i2g2cstPDfEC&amp;printsec=frontcover&amp;dq=concise+handbook+of+algebra+page+74&amp;hl=it&amp;sa=X&amp;ved=0ahUKEwi40bW36sbRAhXHShQKHcSJC4gQ6AEIIzAA#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\"><em>The Concise Handbook of Algebra</em>, p.74</a> and the references given therein.</p>\n" }, { "answer_id": 259850, "score": 12, "text": "<p>If $M$ and $N$ are smooth manifolds, then their diffeomorphisms groups are isomorphic if and only if $M$ and $N$ are diffeomorphic. 
This smooth counterpart to Whittaker's theorem has been proved by Filipkiewicz (<a href=\"http://www.ams.org/mathscinet-getitem?mr=693972\" rel=\"nofollow noreferrer\">Ergodic Theory and Dynamical Systems, 1982</a>).</p>\n" }, { "answer_id": 259747, "score": 11, "text": "<p>If $M,N$ are two countable, $\\omega$-categorical and $\\omega$-stable structures, and $\\operatorname{Aut}(M)\\cong \\operatorname{Aut}(N)$ (as topological groups), then $M$ and $N$ are bi-interpretable.</p>\n\n<p>More generally, $M$ and $N$ only need to be countable, $\\omega$-categorical and satisfy the so-called small index property. As far as I can recall, an analogous result is true for metric structures (in continuous logic).</p>\n" }, { "answer_id": 259786, "score": 10, "text": "<p>Under the Generalized Continuum Hypothesis, $$2^{\\aleph_\\alpha}=\\aleph_{\\alpha+1}\\quad(\\forall\\alpha),$$\n<strong>sets</strong> with no structure (so automorphisms are just bijections) is an example.</p>\n\n<p>Namely, by</p>\n\n<p><a href=\"https://mathoverflow.net/questions/27785/cardinality-of-the-permutations-of-an-infinite-set/\">Cardinality of the permutations of an infinite set</a></p>\n\n<p>we have $\\text{card Aut}(X)=2^{\\text{card}(X)}$, and\n$$2^{\\aleph_\\alpha}=2^{\\aleph_\\beta}\\implies\\aleph_{\\alpha+1}=\\aleph_{\\beta+1}\\implies\\alpha+1=\\beta+1\\implies\\alpha=\\beta.$$</p>\n\n<p>(I suppose it's easy to check that we need more than ZFC but do not <em>need</em> GCH here.)</p>\n" }, { "answer_id": 260363, "score": 3, "text": "<p>Geometric Complexity Theory:</p>\n\n<blockquote>\n <p><strong>Theorem (Mulmuley and Sohoni <a href=\"http://epubs.siam.org/doi/pdf/10.1137/S009753970038715X\" rel=\"nofollow noreferrer\">[MS]</a>)</strong>\n The permanent (respectively the determinant) polynomial is characterized by its symmetry group. 
</p>\n</blockquote>\n\n<p>That is if $P$ is a homogeneous polynomial of degree $m$ in $m^2$ variables and its symmetry group $G_P$ also fixes the permanent (respectively the determinant), then $P$ must be a scalar multiple of the permanent (respectively the determinant). </p>\n\n<p>Landsberg and Ressayre <a href=\"https://arxiv.org/pdf/1508.05788v2.pdf\" rel=\"nofollow noreferrer\">[LR]</a> made progress on Valiant's version of P vs NP using this result.</p>\n" } ]
Question: When the automorphism group of an object determines the object Answer 1: <p>The following result holds.</p> <blockquote> <p><strong>Theorem.</strong></p> <p><strong>(1)</strong> <span class="math-container">$\,$</span> (Baer-Kaplanski) <span class="math-container">$\,$</span> If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are torsion groups with isomorphic endomorphism rings <span class="math-container">$\mathrm{End}(G)$</span> and <span class="math-container">$\mathrm{End}(H)$</span>, then <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are isomorphic, and any ring isomorphism <span class="math-container">$\psi \colon \mathrm{End}(G) \to \mathrm{End}(H)$</span> is induced by some group isomorphism <span class="math-container">$\varphi \colon G \to H$</span>.</p> <p><strong>(2)</strong> <span class="math-container">$\,$</span> (Leptin-Liebert) If <span class="math-container">$G$</span>, <span class="math-container">$H$</span> are abelian <span class="math-container">$p$</span>-groups <span class="math-container">$(p &gt;3)$</span> and <span class="math-container">$\mathrm{Aut}(G)$</span> is isomorphic to <span class="math-container">$\mathrm{Aut}(H)$</span>, then <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$H$</span>.</p> </blockquote> <p>See A. V. Mikhalev, G. Pilz: <a href="https://books.google.it/books?id=i2g2cstPDfEC&amp;printsec=frontcover&amp;dq=concise+handbook+of+algebra+page+74&amp;hl=it&amp;sa=X&amp;ved=0ahUKEwi40bW36sbRAhXHShQKHcSJC4gQ6AEIIzAA#v=onepage&amp;q&amp;f=false" rel="noreferrer"><em>The Concise Handbook of Algebra</em>, p.74</a> and the references given therein.</p> Answer 2: <p>If $M$ and $N$ are smooth manifolds, then their diffeomorphisms groups are isomorphic if and only if $M$ and $N$ are diffeomorphic. 
This smooth counterpart to Whittaker's theorem has been proved by Filipkiewicz (<a href="http://www.ams.org/mathscinet-getitem?mr=693972" rel="nofollow noreferrer">Ergodic Theory and Dynamical Systems, 1982</a>).</p> Answer 3: <p>If $M,N$ are two countable, $\omega$-categorical and $\omega$-stable structures, and $\operatorname{Aut}(M)\cong \operatorname{Aut}(N)$ (as topological groups), then $M$ and $N$ are bi-interpretable.</p> <p>More generally, $M$ and $N$ only need to be countable, $\omega$-categorical and satisfy the so-called small index property. As far as I can recall, an analogous result is true for metric structures (in continuous logic).</p> Answer 4: <p>Under the Generalized Continuum Hypothesis, $$2^{\aleph_\alpha}=\aleph_{\alpha+1}\quad(\forall\alpha),$$ <strong>sets</strong> with no structure (so automorphisms are just bijections) is an example.</p> <p>Namely, by</p> <p><a href="https://mathoverflow.net/questions/27785/cardinality-of-the-permutations-of-an-infinite-set/">Cardinality of the permutations of an infinite set</a></p> <p>we have $\text{card Aut}(X)=2^{\text{card}(X)}$, and $$2^{\aleph_\alpha}=2^{\aleph_\beta}\implies\aleph_{\alpha+1}=\aleph_{\beta+1}\implies\alpha+1=\beta+1\implies\alpha=\beta.$$</p> <p>(I suppose it's easy to check that we need more than ZFC but do not <em>need</em> GCH here.)</p> Answer 5: <p>Geometric Complexity Theory:</p> <blockquote> <p><strong>Theorem (Mulmuley and Sohoni <a href="http://epubs.siam.org/doi/pdf/10.1137/S009753970038715X" rel="nofollow noreferrer">[MS]</a>)</strong> The permanent (respectively the determinant) polynomial is characterized by its symmetry group. </p> </blockquote> <p>That is if $P$ is a homogeneous polynomial of degree $m$ in $m^2$ variables and its symmetry group $G_P$ also fixes the permanent (respectively the determinant), then $P$ must be a scalar multiple of the permanent (respectively the determinant). 
</p> <p>Landsberg and Ressayre <a href="https://arxiv.org/pdf/1508.05788v2.pdf" rel="nofollow noreferrer">[LR]</a> made progress on Valiant's version of P vs NP using this result.</p> Answer 6: <p>The following result holds.</p> <blockquote> <p><strong>Theorem.</strong></p> <p><strong>(1)</strong> <span class="math-container">$\,$</span> (Baer-Kaplanski) <span class="math-container">$\,$</span> If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are torsion groups with isomorphic endomorphism rings <span class="math-container">$\mathrm{End}(G)$</span> and <span class="math-container">$\mathrm{End}(H)$</span>, then <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are isomorphic, and any ring isomorphism <span class="math-container">$\psi \colon \mathrm{End}(G) \to \mathrm{End}(H)$</span> is induced by some group isomorphism <span class="math-container">$\varphi \colon G \to H$</span>.</p> <p><strong>(2)</strong> <span class="math-container">$\,$</span> (Leptin-Liebert) If <span class="math-container">$G$</span>, <span class="math-container">$H$</span> are abelian <span class="math-container">$p$</span>-groups <span class="math-container">$(p &gt;3)$</span> and <span class="math-container">$\mathrm{Aut}(G)$</span> is isomorphic to <span class="math-container">$\mathrm{Aut}(H)$</span>, then <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$H$</span>.</p> </blockquote> <p>See A. V. Mikhalev, G. Pilz: <a href="https://books.google.it/books?id=i2g2cstPDfEC&amp;printsec=frontcover&amp;dq=concise+handbook+of+algebra+page+74&amp;hl=it&amp;sa=X&amp;ved=0ahUKEwi40bW36sbRAhXHShQKHcSJC4gQ6AEIIzAA#v=onepage&amp;q&amp;f=false" rel="noreferrer"><em>The Concise Handbook of Algebra</em>, p.74</a> and the references given therein.</p> Answer 7: <p>If $M$ and $N$ are smooth manifolds, then their diffeomorphisms groups are isomorphic if and only if $M$ and $N$ are diffeomorphic. 
This smooth counterpart to Whittaker's theorem was proved by Filipkiewicz (<a href="http://www.ams.org/mathscinet-getitem?mr=693972" rel="nofollow noreferrer">Ergodic Theory and Dynamical Systems, 1982</a>).</p>
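As a concrete aside (not part of the original answers): the two polynomials in the Mulmuley–Sohoni theorem above are given by their Leibniz-type formulas, $\det(A)=\sum_\sigma \operatorname{sgn}(\sigma)\prod_i a_{i\sigma(i)}$ and $\operatorname{per}(A)=\sum_\sigma \prod_i a_{i\sigma(i)}$, which differ only in the signs. A minimal Python sketch of both, summing over all permutations:

```python
from itertools import permutations

def sign(sigma):
    # Sign of a permutation, computed from its inversion count.
    n = len(sigma)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def permanent(A):
    # per(A) = sum over sigma of prod_i A[i][sigma(i)]  (no signs)
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

def determinant(A):
    # det(A) = sum over sigma of sgn(sigma) * prod_i A[i][sigma(i)]
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[1, 2], [3, 4]]
print(determinant(A))  # 1*4 - 2*3 = -2
print(permanent(A))    # 1*4 + 2*3 = 10
```

The brute-force sum over $n!$ permutations is only meant to exhibit the two polynomials; the point of the Valiant-style VP vs VNP question referenced in [LR] is precisely that the determinant admits efficient (polynomial-size) computations while the permanent is conjectured not to.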